SWFOC: proven-core extender reconciliation + ROE hard gate#49

Open
Prekzursil wants to merge 9 commits into main from rd/deep-re-universal-sdk

Conversation

@Prekzursil
Owner

@Prekzursil Prekzursil commented Feb 18, 2026

User description

Summary

  • Reconciles rd/deep-re-universal-sdk to proven runtime/extender core only.
  • Preserves strict fail-closed routing for set_credits with extender-first execution.
  • Adds deterministic Windows native host build resolution for Visual Studio 18 (2026), plus a strict ROE live hard-gate validation flow.

Affected Profiles

  • base_swfoc
  • aotr_1397421866_swfoc
  • roe_3447786229_swfoc

Evidence

  • runId: 20260218-213838
  • classification: passed
  • repro bundle json: TestResults/runs/20260218-213838/repro-bundle.json
  • repro bundle markdown: TestResults/runs/20260218-213838/repro-bundle.md

Reason-Code Diagnostics

  • attach/host: selectedHostProcess.name=StarWarsG.exe
  • backend route: backendRouteDecision.backend=Extender
  • route reason: backendRouteDecision.reasonCode=CAPABILITY_PROBE_PASS
  • capability reason: capabilityProbeSnapshot.probeReasonCode=CAPABILITY_PROBE_PASS
  • hook state: hookInstallReport.state=HOOK_ONESHOT

Validation

  • dotnet restore SwfocTrainer.sln
  • dotnet build SwfocTrainer.sln -c Release
  • dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests"
  • powershell.exe -File tools/validate-policy-contracts.ps1
  • powershell.exe -File tools/validate-binary-fingerprint.ps1 -Strict -FingerprintPath tools/fixtures/binary_fingerprint_sample.json
  • powershell.exe -File tools/validate-signature-pack.ps1 -Strict -SignaturePackPath tools/fixtures/signature_pack_sample.json
  • powershell.exe -File tools/run-live-validation.ps1 -Configuration Release -NoBuild -Scope ROE -Strict -RequireNonBlockedClassification
  • powershell.exe -File tools/validate-repro-bundle.ps1 -Strict -BundlePath TestResults/runs/20260218-213838/repro-bundle.json

PR Type

Enhancement, Tests


Description

  • Implements comprehensive SDK operation routing with fail-closed safety gates for extender backend execution

  • Adds universal profile variant resolution with confidence scoring and process detection heuristics

  • Introduces native extender bridge with named-pipe communication for capability probing and credits execution

  • Implements binary fingerprint capture and capability map resolution for deterministic operation availability

  • Adds backend routing decision logic with auto-fallback from extender to legacy paths based on capability state

  • Integrates runtime mode probing from symbol health indicators to disambiguate tactical vs galactic execution modes

  • Adds comprehensive test coverage for fingerprint service, capability maps, profile variants, and backend routing

  • Implements profile inheritance merging for backend preferences, host preferences, and capability requirements

  • Adds native Windows build orchestration with CMake generator auto-detection and Visual Studio resolution

  • Implements live validation hard gate with TRX polling and blocking classification detection

  • Updates profile format and architecture documentation for tiered backend routing and SDK execution paths

  • Adds schema definitions and sample fixtures for binary fingerprints, signature packs, and repro bundles


File Walkthrough

Relevant files
Enhancement (54 files)
RuntimeAdapter.cs
Universal profile resolution and extender backend routing integration

src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs

  • Added profile variant resolution for universal profile support with
    confidence tracking
  • Implemented backend routing decision logic with capability-based
    fail-closed safety gates
  • Added runtime mode probing using symbol health to determine effective
    execution mode
  • Enhanced process selection scoring with host role, workshop matches,
    and module size metrics
  • Integrated extender backend execution path with capability report
    validation
+362/-21
MainViewModel.cs
Universal profile UI integration and variant validation   

src/SwfocTrainer.App/ViewModels/MainViewModel.cs

  • Added universal profile selection as default when available
  • Integrated profile variant resolution in attach flow with confidence
    display
  • Added save patch variant mismatch validation to prevent incompatible
    operations
  • Enhanced diagnostics display with resolved variant, host role, and
    selection score information
  • Added backend route suffix to action execution status messages
+144/-8 
NamedPipeExtenderBackend.cs
Named-pipe extender backend bridge implementation               

src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs

  • New service implementing named-pipe communication with extender bridge
    process
  • Provides capability probing and action execution routing to native
    extender
  • Includes automatic bridge host process startup with fallback
    resolution
  • Parses capability responses and maps them to backend capability
    contracts
+345/-0 
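
For orientation, a minimal sketch (not from the diff itself) of the kind of round trip this backend performs. The pipe name matches the SwfocExtenderBridge pipe referenced in the compliance notes further down; the one-JSON-line-per-direction framing and the helper name are assumptions.

```csharp
using System.IO;
using System.IO.Pipes;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

public static class BridgeClientSketch
{
    // Hypothetical single round trip. The real backend adds connect/response
    // timeouts, capability parsing, and automatic bridge-host startup with
    // fallback resolution.
    public static async Task<string> SendAsync(object command, CancellationToken ct)
    {
        using var client = new NamedPipeClientStream(
            ".", "SwfocExtenderBridge", PipeDirection.InOut, PipeOptions.Asynchronous);

        await client.ConnectAsync(2000, ct);

        using var writer = new StreamWriter(client, new UTF8Encoding(false), leaveOpen: true) { AutoFlush = true };
        using var reader = new StreamReader(client, Encoding.UTF8, leaveOpen: true);

        await writer.WriteLineAsync(JsonSerializer.Serialize(command));
        return await reader.ReadLineAsync() ?? string.Empty;
    }
}
```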
SdkOperationRouter.cs
SDK operation routing with capability validation                 

src/SwfocTrainer.Core/Services/SdkOperationRouter.cs

  • New router for SDK operation execution with feature flag gating
  • Validates operation mode compatibility and required capability
    contracts
  • Resolves profile variants and binary fingerprints for capability
    mapping
  • Merges context with resolved variant and capability metadata
+222/-0 
BackendRouter.cs
Backend routing with fail-closed mutation safety gates     

src/SwfocTrainer.Runtime/Services/BackendRouter.cs

  • New router implementing fail-closed safety policy for extender-routed
    mutations
  • Maps execution kinds to backend preferences with auto-fallback logic
  • Validates required capability contracts before allowing mutations
  • Provides detailed diagnostics for routing decisions and capability
    states
+191/-0 
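
A simplified sketch of the fail-closed rule described above, with assumed enum/record names and placeholder reason codes (only CAPABILITY_PROBE_PASS and CAPABILITY_UNKNOWN appear in this PR's diagnostics; the shipped router also weighs backendPreference and backend health).

```csharp
using System.Collections.Generic;

public enum BackendKind { Extender, Helper, Memory }

public sealed record RouteDecision(BackendKind Backend, bool Allowed, string ReasonCode);

public static class FailClosedRoutingSketch
{
    public static RouteDecision Route(
        string capabilityId,
        bool isMutation,
        IReadOnlyDictionary<string, bool> capabilities)
    {
        var available = capabilities.TryGetValue(capabilityId, out var ok) && ok;

        if (available)
        {
            return new RouteDecision(BackendKind.Extender, true, "CAPABILITY_PROBE_PASS");
        }

        if (isMutation)
        {
            // Fail closed: a mutation whose capability is unknown or unavailable is
            // blocked rather than silently falling back to a legacy memory write.
            return new RouteDecision(BackendKind.Extender, false, "CAPABILITY_UNKNOWN");
        }

        // Reads may auto-fall back to the legacy memory backend.
        return new RouteDecision(BackendKind.Memory, true, "CAPABILITY_UNKNOWN");
    }
}
```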
CapabilityMapResolver.cs
Capability map resolution from binary fingerprints             

src/SwfocTrainer.Runtime/Services/CapabilityMapResolver.cs

  • New service resolving SDK operation capabilities from
    fingerprint-keyed maps
  • Validates required and optional anchor presence for operation
    availability
  • Computes confidence scores based on matched anchor requirements
  • Loads and parses JSON capability maps with schema validation
+210/-0 
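
Roughly, the anchor rule reduces to the check below. Types and the exact confidence formula are assumptions, not the shipped implementation.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class AnchorRuleSketch
{
    // Any missing required anchor makes the operation unavailable (fail closed);
    // missing optional anchors only lower the confidence score.
    public static (bool Available, double Confidence) Evaluate(
        IReadOnlyCollection<string> requiredAnchors,
        IReadOnlyCollection<string> optionalAnchors,
        IReadOnlySet<string> resolvedAnchors)
    {
        if (requiredAnchors.Any(anchor => !resolvedAnchors.Contains(anchor)))
        {
            return (false, 0.0);
        }

        var total = requiredAnchors.Count + optionalAnchors.Count;
        if (total == 0)
        {
            return (true, 1.0);
        }

        var matchedOptional = optionalAnchors.Count(resolvedAnchors.Contains);
        return (true, (requiredAnchors.Count + matchedOptional) / (double)total);
    }
}
```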
ProfileVariantResolver.cs
Universal profile variant resolution service                         

src/SwfocTrainer.Runtime/Services/ProfileVariantResolver.cs

  • New service resolving universal profile to concrete variant based on
    process detection
  • Uses launch context recommendations with confidence scoring
  • Falls back to fingerprint-based default profile resolution
  • Provides exe-target-based fallback for unrecognized processes
+151/-0 
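
The resolution priority reads as a simple cascade; here is a sketch with hypothetical delegates (the real service is async and also returns confidence and reason-code metadata).

```csharp
using System;

public static class VariantResolutionSketch
{
    // Hypothetical cascade illustrating the documented priority:
    // launch-context recommendation -> fingerprint default -> exe-target fallback -> safe default.
    public static string Resolve(
        string requestedProfileId,
        Func<string?> fromLaunchContext,
        Func<string?> fromFingerprintDefault,
        Func<string?> fromExeTarget,
        string safeDefaultProfileId)
    {
        if (!string.Equals(requestedProfileId, "universal_auto", StringComparison.OrdinalIgnoreCase))
        {
            return requestedProfileId; // explicit profile selections pass through unchanged
        }

        return fromLaunchContext()
            ?? fromFingerprintDefault()
            ?? fromExeTarget()
            ?? safeDefaultProfileId;
    }
}
```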
SdkCapabilityModels.cs
SDK capability and profile variant model definitions         

src/SwfocTrainer.Core/Models/SdkCapabilityModels.cs

  • New model definitions for SDK capability state and resolution
  • Defines SdkCapabilityStatus, CapabilityReasonCode, and anchor-related
    types
  • Includes BinaryFingerprint, CapabilityMap, and
    ProfileVariantResolution records
  • Defines SdkOperationRequest and SdkOperationResult envelopes
+132/-0 
RuntimeModeProbeResolver.cs
Runtime mode probing from symbol health indicators             

src/SwfocTrainer.Runtime/Services/RuntimeModeProbeResolver.cs

  • New service computing effective runtime mode from resolved symbol
    health
  • Counts tactical and galactic indicator symbols to disambiguate mode
  • Applies heuristics for ambiguous signals and fallback to hint mode
  • Returns probe result with reason code and signal counts
+128/-0 
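
A hedged sketch of the indicator-counting heuristic; the tie-breaking rule (fall back to the hint on an even count) is an assumption.

```csharp
using System.Collections.Generic;
using System.Linq;

public enum RuntimeMode { Galactic, Tactical }

public static class ModeProbeSketch
{
    // Count healthy tactical vs. galactic indicator symbols and fall back to the
    // caller's hint when the signal is ambiguous.
    public static RuntimeMode Resolve(
        IReadOnlyCollection<string> healthySymbols,
        IReadOnlySet<string> tacticalIndicators,
        IReadOnlySet<string> galacticIndicators,
        RuntimeMode hint)
    {
        var tactical = healthySymbols.Count(tacticalIndicators.Contains);
        var galactic = healthySymbols.Count(galacticIndicators.Contains);

        if (tactical == galactic)
        {
            return hint;
        }

        return tactical > galactic ? RuntimeMode.Tactical : RuntimeMode.Galactic;
    }
}
```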
BinaryFingerprintService.cs
Binary fingerprint capture and identification service       

src/SwfocTrainer.Runtime/Services/BinaryFingerprintService.cs

  • New service capturing binary fingerprints from module paths
  • Computes SHA256 hash and extracts version metadata
  • Loads module list from running process if available
  • Generates fingerprint ID from module name and hash prefix
+102/-0 
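
A minimal sketch of the capture step. The module-stem-plus-hash-prefix ID format is inferred from the capability-map filename starwarsg_d3e0b6d3a9a31f42.json added in this PR; the 16-character prefix length is an assumption.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

public static class FingerprintCaptureSketch
{
    // Hash the module and derive a fingerprint ID from the module stem plus a
    // short hash prefix. The real service also captures version metadata and,
    // when available, a module list from the running process.
    public static (string FingerprintId, string Sha256) Capture(string modulePath)
    {
        using var stream = File.OpenRead(modulePath);
        using var sha = SHA256.Create();
        var hash = Convert.ToHexString(sha.ComputeHash(stream)).ToLowerInvariant();

        var stem = Path.GetFileNameWithoutExtension(modulePath).ToLowerInvariant();
        return ($"{stem}_{hash[..16]}", hash);   // 16-char prefix is an assumption
    }
}
```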
ProcessLocator.cs
Process locator host role and module size detection           

src/SwfocTrainer.Runtime/Services/ProcessLocator.cs

  • Added main module size capture from process metadata
  • Added host role determination based on process name and exe target
  • Enhanced metadata dictionary with host role, module size, and workshop
    match count
  • Updated process metadata construction with new selection scoring
    fields
+32/-4   
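
A simplified illustration of host-role classification. Treating StarWarsG.exe as the game host matches the selectedHostProcess evidence above; the launcher rule and method shape are assumptions.

```csharp
using System;

public enum ProcessHostRole { Unknown, Launcher, GameHost }

public static class HostRoleSketch
{
    // Hypothetical rule: StarWarsG.exe is treated as the in-game host; a process
    // that only matches the profile's exe target is treated as a launcher;
    // anything else stays Unknown.
    public static ProcessHostRole Classify(string processName, string exeTarget)
    {
        if (string.Equals(processName, "StarWarsG.exe", StringComparison.OrdinalIgnoreCase))
        {
            return ProcessHostRole.GameHost;
        }

        if (string.Equals(processName, exeTarget, StringComparison.OrdinalIgnoreCase))
        {
            return ProcessHostRole.Launcher;
        }

        return ProcessHostRole.Unknown;
    }
}
```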
RuntimeReasonCode.cs
Runtime reason code taxonomy for diagnostics                         

src/SwfocTrainer.Core/Models/RuntimeReasonCode.cs

  • New enum defining stable reason-code taxonomy for attach, capability,
    mode, hook, helper, safety, and rollback operations
  • Provides diagnostic codes for process attachment, capability probing,
    and execution gating
  • Includes codes for helper routing, safety gates, and rollback outcomes
+47/-0   
SdkOperationCatalog.cs
SDK operation catalog and definition models                           

src/SwfocTrainer.Core/Models/SdkOperationCatalog.cs

  • Introduces SdkOperationCatalog static class defining canonical SDK
    operations for research runtime
  • Implements SdkOperationDefinition sealed record with operation
    metadata (id, mutation flag, allowed modes)
  • Provides factory methods ReadOnly() and Mutation() for operation
    definition creation
  • Includes mode validation logic via IsModeAllowed() method
+72/-0   
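
The factory/mode-gating pattern described above, sketched with assumed mode names and an example entry (the real catalog and its operation list live in SdkOperationCatalog.cs).

```csharp
using System;
using System.Linq;

// Hypothetical shape of the catalog pattern.
public sealed record SdkOperationDefinition(string Id, bool IsMutation, string[] AllowedModes)
{
    public static SdkOperationDefinition ReadOnly(string id, params string[] modes) =>
        new(id, false, modes);

    public static SdkOperationDefinition Mutation(string id, params string[] modes) =>
        new(id, true, modes);

    // An empty mode list means "allowed in any mode".
    public bool IsModeAllowed(string mode) =>
        AllowedModes.Length == 0 ||
        AllowedModes.Contains(mode, StringComparer.OrdinalIgnoreCase);
}

public static class CatalogUsageSketch
{
    // Example entry only; treating set_credits as galactic-only is an assumption.
    public static readonly SdkOperationDefinition SetCredits =
        SdkOperationDefinition.Mutation("set_credits", "Galactic");
}
```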
BackendRoutingModels.cs
Backend routing and capability models                                       

src/SwfocTrainer.Core/Models/BackendRoutingModels.cs

  • Introduces ExecutionBackendKind enum for backend identity (Extender,
    Helper, Memory, Save)
  • Defines CapabilityConfidenceState enum (Experimental, Verified)
  • Adds sealed records for capability and routing models:
    BackendCapability, CapabilityReport, BackendHealth,
    BackendRouteDecision, ExtenderCommand, ExtenderResult
  • Includes helper methods like IsFeatureAvailable() and Unknown()
    factory
+84/-0   
App.xaml.cs
SDK runtime service registration                                                 

src/SwfocTrainer.App/App.xaml.cs

  • Registers new SDK runtime services in dependency injection container
  • Adds IBinaryFingerprintService, ICapabilityMapResolver,
    ISdkExecutionGuard registrations
  • Registers routing services: ISdkOperationRouter, IBackendRouter,
    IExecutionBackend
  • Configures capability maps root path and adds RuntimeModeProbeResolver
    singleton
+15/-0   
FileSystemProfileRepository.cs
Profile inheritance merging enhancements                                 

src/SwfocTrainer.Profiles/Services/FileSystemProfileRepository.cs

  • Adds merging logic for RequiredCapabilities and ExperimentalFeatures
    arrays in profile inheritance
  • Merges BackendPreference and HostPreference fields with child override
    precedence
  • Filters out whitespace-only entries and applies case-insensitive
    deduplication
+15/-1   
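
A compact sketch of the merge rules described above: child scalar values win, list entries are unioned, whitespace-only entries are dropped, and deduplication is case-insensitive. Helper names are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ProfileMergeSketch
{
    // List fields such as requiredCapabilities / experimentalFeatures.
    public static string[] MergeList(IEnumerable<string>? parent, IEnumerable<string>? child)
    {
        return (child ?? Enumerable.Empty<string>())
            .Concat(parent ?? Enumerable.Empty<string>())
            .Where(entry => !string.IsNullOrWhiteSpace(entry))   // drop whitespace-only entries
            .Select(entry => entry.Trim())
            .Distinct(StringComparer.OrdinalIgnoreCase)          // case-insensitive dedup
            .ToArray();
    }

    // Scalar fields such as backendPreference / hostPreference: child overrides parent.
    public static string MergeScalar(string? parent, string? child, string fallback) =>
        !string.IsNullOrWhiteSpace(child) ? child!
        : !string.IsNullOrWhiteSpace(parent) ? parent!
        : fallback;
}
```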
ProfileValidator.cs
Profile backend and host preference validation                     

src/SwfocTrainer.Profiles/Validation/ProfileValidator.cs

  • Adds validation for backendPreference field (allowed values: auto,
    extender, helper, memory)
  • Adds validation for hostPreference field (allowed values:
    starwarsg_preferred, any)
  • Throws InvalidDataException with descriptive messages for invalid
    values
+20/-0   
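
The allowed-value checks reduce to something like the sketch below; field names and value lists come from the walkthrough, while the method shape and exception message wording are assumed.

```csharp
using System;
using System.IO;

public static class PreferenceValidationSketch
{
    private static readonly string[] AllowedBackends = { "auto", "extender", "helper", "memory" };
    private static readonly string[] AllowedHosts = { "starwarsg_preferred", "any" };

    // Hypothetical shape; the shipped validator works on the full profile model.
    public static void Validate(string profileId, string backendPreference, string hostPreference)
    {
        if (Array.IndexOf(AllowedBackends, backendPreference) < 0)
        {
            throw new InvalidDataException(
                $"Profile '{profileId}': backendPreference '{backendPreference}' is not one of auto/extender/helper/memory.");
        }

        if (Array.IndexOf(AllowedHosts, hostPreference) < 0)
        {
            throw new InvalidDataException(
                $"Profile '{profileId}': hostPreference '{hostPreference}' is not one of starwarsg_preferred/any.");
        }
    }
}
```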
SdkExecutionGuard.cs
SDK execution guard implementation                                             

src/SwfocTrainer.Runtime/Services/SdkExecutionGuard.cs

  • Implements ISdkExecutionGuard interface with fail-closed execution
    policy
  • Allows reads in degraded capability state; blocks writes unless
    capability available
  • Returns SdkExecutionDecision with reason codes and descriptive
    messages
+26/-0   
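
The read/write policy amounts to a small decision function; this sketch uses assumed SdkCapabilityStatus members and reason-code strings, and the real guard returns a richer SdkExecutionDecision.

```csharp
public enum SdkCapabilityStatus { Unavailable, Degraded, Available }

public sealed record SdkExecutionDecision(bool Allowed, string ReasonCode);

public static class ExecutionGuardSketch
{
    // Hypothetical policy: reads tolerate a degraded capability state,
    // writes require the capability to be fully available (fail closed).
    public static SdkExecutionDecision CanExecute(SdkCapabilityStatus status, bool isMutation)
    {
        if (!isMutation && status >= SdkCapabilityStatus.Degraded)
        {
            return new SdkExecutionDecision(true, "READ_ALLOWED_DEGRADED");
        }

        if (isMutation && status == SdkCapabilityStatus.Available)
        {
            return new SdkExecutionDecision(true, "CAPABILITY_PROBE_PASS");
        }

        return new SdkExecutionDecision(false, "CAPABILITY_UNKNOWN");
    }
}
```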
NoopSdkRuntimeAdapter.cs
No-op SDK runtime adapter stub                                                     

src/SwfocTrainer.Runtime/Services/NoopSdkRuntimeAdapter.cs

  • Implements ISdkRuntimeAdapter as no-op stub for research builds
  • Returns unavailable status with OperationNotMapped reason code
  • Includes operation and profile diagnostics in result
+22/-0   
ICapabilityMapResolver.cs
Capability map resolver contract                                                 

src/SwfocTrainer.Core/Contracts/ICapabilityMapResolver.cs

  • Defines interface for resolving operation capability state from
    fingerprint-mapped anchors
  • Provides ResolveAsync() method for capability resolution with anchor
    requirements
  • Includes ResolveDefaultProfileIdAsync() for fingerprint-based profile
    selection
+18/-0   
IExecutionBackend.cs
Execution backend contract                                                             

src/SwfocTrainer.Core/Contracts/IExecutionBackend.cs

  • Defines interface for execution backend implementations
  • Specifies methods for capability probing, action execution, and health
    checks
  • Includes BackendKind property for backend identification
+20/-0   
ProfileModels.cs
Profile model backend and capability fields                           

src/SwfocTrainer.Core/Models/ProfileModels.cs

  • Adds four new optional fields to TrainerProfile record:
    BackendPreference, RequiredCapabilities, HostPreference,
    ExperimentalFeatures
  • Sets default values: BackendPreference="auto",
    HostPreference="starwarsg_preferred"
  • Allows null for RequiredCapabilities and ExperimentalFeatures
+5/-1     
RuntimeModels.cs
Process metadata host and scoring fields                                 

src/SwfocTrainer.Core/Models/RuntimeModels.cs

  • Adds four new optional fields to ProcessMetadata record: HostRole,
    MainModuleSize, WorkshopMatchCount, SelectionScore
  • Provides default values for process host role classification and
    scoring metrics
+5/-1     
IProfileVariantResolver.cs
Profile variant resolver contract                                               

src/SwfocTrainer.Core/Contracts/IProfileVariantResolver.cs

  • Defines interface for resolving user-facing profile selections
    (including universal_auto)
  • Provides ResolveAsync() method with optional process list for variant
    resolution
+14/-0   
Enums.cs
Execution kind and process host role enums                             

src/SwfocTrainer.Core/Models/Enums.cs

  • Adds Sdk value to ExecutionKind enum
  • Introduces ProcessHostRole enum with values: Unknown, Launcher,
    GameHost
+9/-1     
NullSdkDiagnosticsSink.cs
Null SDK diagnostics sink implementation                                 

src/SwfocTrainer.Core/Services/NullSdkDiagnosticsSink.cs

  • Implements ISdkDiagnosticsSink as null-object pattern for SDK
    diagnostics
  • Provides no-op WriteAsync() method for structured diagnostics records
+12/-0   
IBinaryFingerprintService.cs
Binary fingerprint service contract                                           

src/SwfocTrainer.Core/Contracts/IBinaryFingerprintService.cs

  • Defines interface for capturing deterministic binary fingerprints
  • Provides CaptureFromPathAsync() method with optional process ID
    parameter
+11/-0   
ISdkOperationRouter.cs
SDK operation router contract                                                       

src/SwfocTrainer.Core/Contracts/ISdkOperationRouter.cs

  • Defines interface for routing SDK operations through capability
    resolution and guards
  • Provides ExecuteAsync() method for operation execution with
    cancellation support
+11/-0   
ISdkRuntimeAdapter.cs
SDK runtime adapter contract                                                         

src/SwfocTrainer.Core/Contracts/ISdkRuntimeAdapter.cs

  • Defines interface for SDK execution adapter for research-track
    in-process handling
  • Provides ExecuteAsync() method for operation execution
+11/-0   
ISdkDiagnosticsSink.cs
SDK diagnostics sink contract                                                       

src/SwfocTrainer.Core/Contracts/ISdkDiagnosticsSink.cs

  • Defines interface for structured SDK diagnostics record sink
  • Provides WriteAsync() method for diagnostics persistence
+11/-0   
ISdkExecutionGuard.cs
SDK execution guard contract                                                         

src/SwfocTrainer.Core/Contracts/ISdkExecutionGuard.cs

  • Defines interface for fail-closed SDK operation execution policy
  • Provides CanExecute() method for capability-based execution decisions
+11/-0   
IBackendRouter.cs
Backend router contract                                                                   

src/SwfocTrainer.Core/Contracts/IBackendRouter.cs

  • Defines interface for backend routing decisions
  • Provides Resolve() method for routing based on request, profile,
    process, and capability context
+12/-0   
collect-mod-repro-bundle.ps1
Repro bundle collection enhancements                                         

tools/collect-mod-repro-bundle.ps1

  • Adds process host role detection (game_host, launcher, unknown) and
    main module size capture
  • Implements selection score calculation based on workshop matches, host
    role, and module size
  • Adds Get-ProfileRequiredCapabilities() and Get-RuntimeEvidence()
    helper functions
  • Enhances repro bundle with backend route decision, capability probe
    snapshot, hook install report, and overlay state
  • Updates markdown output to include host process details and capability
    information
+170/-12
build-native.ps1
Native Windows build orchestration script                               

tools/native/build-native.ps1

  • Implements Windows native build orchestration with CMake generator
    resolution
  • Supports Visual Studio 18 2026, 17 2022, 16 2019 with automatic
    detection
  • Includes fallback to Ninja generator and WSL build path
  • Handles CMake cache invalidation on generator changes
  • Discovers SwfocExtender.Host.exe artifact with recursive fallback
+217/-0 
resolve-cmake.ps1
CMake and Visual Studio resolver script                                   

tools/native/resolve-cmake.ps1

  • Resolves CMake executable from PATH and well-known installation
    locations
  • Detects Visual Studio instances via vswhere with fallback to disk
    scanning
  • Maps Visual Studio product line versions to CMake generators
  • Outputs JSON or human-readable format with generator recommendations
+208/-0 
run-live-validation.ps1
Live validation hard gate and TRX polling                               

tools/run-live-validation.ps1

  • Adds environment variable propagation for SWFOC_LIVE_OUTPUT_DIR and
    SWFOC_LIVE_TEST_NAME
  • Implements TRX file polling with 120-second deadline and retry logic
  • Adds -RequireNonBlockedClassification hard gate for blocking
    classifications
  • Extracts and reports repro bundle classification in summary output
+88/-9   
generate-signature-candidates.ps1
Signature candidate generation from fingerprint                   

tools/research/generate-signature-candidates.ps1

  • Generates signature pack from binary fingerprint and profile
    definitions
  • Maps profile signatures to anchor definitions with required/optional
    flags
  • Defines operation anchor requirements for SDK operations (spawn, kill,
    set_owner, etc.)
  • Outputs normalized JSON with schema version and fingerprint
    correlation
+80/-0   
capture-binary-fingerprint.ps1
Binary fingerprint capture script                                               

tools/research/capture-binary-fingerprint.ps1

  • Captures deterministic binary fingerprints with SHA256 hash and
    version metadata
  • Generates fingerprint ID from module stem and hash prefix
  • Optionally captures module list from running process
  • Outputs JSON with schema version and timestamp
+64/-0   
validate-signature-pack.ps1
Signature pack validation script                                                 

tools/validate-signature-pack.ps1

  • Validates signature pack JSON against schema with required field
    checks
  • Implements strict mode validation for schema version, anchor
    definitions, and operations
  • Validates operation anchor arrays and required field presence
+62/-0   
run-capability-intel.ps1
Capability intelligence workflow orchestration                     

tools/research/run-capability-intel.ps1

  • Orchestrates capability intelligence workflow: fingerprint capture,
    signature generation, normalization
  • Generates analysis notes markdown with run metadata and analyst
    guidance
  • Outputs fingerprint, signature pack, and notes to research directory
+59/-0   
validate-binary-fingerprint.ps1
Binary fingerprint validation script                                         

tools/validate-binary-fingerprint.ps1

  • Validates binary fingerprint JSON against schema with required field
    checks
  • Implements strict mode validation for schema version, SHA256 format,
    and module list
+45/-0   
normalize-signature-pack.ps1
Signature pack normalization script                                           

tools/research/normalize-signature-pack.ps1

  • Normalizes signature pack JSON with sorted anchors and operations
  • Deduplicates and sorts anchor and operation lists for deterministic
    output
  • Preserves schema version and metadata fields
+43/-0   
BridgeHostMain.cpp
Native extender bridge host main entry point implementation

native/SwfocExtender.Bridge/src/BridgeHostMain.cpp

  • Implements named-pipe bridge host entry point for extender
    communication
  • Handles health, probe_capabilities, and set_credits commands with JSON
    payload parsing
  • Integrates EconomyPlugin for credits execution with lock/one-shot
    semantics
  • Provides graceful shutdown via Windows console control handler
+264/-0 
HookLifecycleManager.hpp
Hook lifecycle state management contract definition           

native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp

  • Defines hook state tracking enum (NotInstalled, Installed, Failed,
    RolledBack)
  • Provides HookRecord struct with state and reason code
  • Declares lifecycle management interface for hook installation tracking
+33/-0   
HookLifecycleManager.cpp
Hook lifecycle manager implementation                                       

native/SwfocExtender.Core/src/HookLifecycleManager.cpp

  • Implements hook state transitions (markInstalled, markFailed,
    markRolledBack)
  • Provides hook record lookup by ID with default fallback
+26/-0   
CapabilityProbe.hpp
Capability probe state model contract definition                 

native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp

  • Defines capability state enum (Unknown, Experimental, Verified)
  • Declares CapabilityEntry struct with availability and reason code
  • Provides capability probe interface for feature availability tracking
+32/-0   
CapabilityProbe.cpp
Capability probe implementation                                                   

native/SwfocExtender.Core/src/CapabilityProbe.cpp

  • Implements capability marking and availability checking
  • Provides immutable snapshot of all tracked capabilities
+22/-0   
NamedPipeBridgeServer.hpp
Named-pipe bridge server contract definition                         

native/SwfocExtender.Bridge/include/swfoc_extender/bridge/NamedPipeBridgeServer.hpp

  • Defines BridgeCommand struct with feature ID, profile ID, and JSON
    payload
  • Defines BridgeResult struct with success status, reason code, and
    diagnostics
  • Declares named-pipe server interface with handler callback pattern
+51/-0   
PluginContracts.hpp
Plugin execution contract definitions                                       

native/SwfocExtender.Plugins/include/swfoc_extender/plugins/PluginContracts.hpp

  • Defines PluginRequest struct for feature execution with int/lock
    values
  • Defines CapabilitySnapshot struct for capability state reporting
  • Defines PluginResult struct with diagnostics map
  • Declares IPlugin interface for plugin implementations
+37/-0   
EconomyPlugin.hpp
Economy plugin interface declaration                                         

native/SwfocExtender.Plugins/include/swfoc_extender/plugins/EconomyPlugin.hpp

  • Declares EconomyPlugin class implementing IPlugin interface
  • Provides execute() for set_credits operations with atomic state
    tracking
  • Provides capabilitySnapshot() for credits availability reporting
+26/-0   
EconomyPlugin.cpp
Economy plugin set_credits implementation                               

native/SwfocExtender.Plugins/src/EconomyPlugin.cpp

  • Implements set_credits execution with validation and lock/one-shot
    semantics
  • Tracks hook installation state and locked credits value atomically
  • Returns capability snapshot with Verified or Experimental state
+62/-0   
OverlayState.hpp
Overlay state model contract definition                                   

native/SwfocExtender.Overlay/include/swfoc_extender/overlay/OverlayState.hpp

  • Declares overlay visibility state management interface
+14/-0   
OverlayState.cpp
Overlay state implementation                                                         

native/SwfocExtender.Overlay/src/OverlayState.cpp

  • Implements overlay visibility getter/setter
+13/-0   
universal_auto.json
Universal auto profile for variant resolution                       

profiles/default/profiles/universal_auto.json

  • Introduces user-facing universal auto profile with extender backend
    preference
  • Defines comprehensive required capabilities across all profiles
  • Enables variant resolution feature flag
  • Inherits from base_swfoc with empty signature sets and actions
+34/-0   
Tests (12 files)
BackendRouterTests.cs
Backend router decision logic test coverage                           

tests/SwfocTrainer.Tests/Runtime/BackendRouterTests.cs

  • New test suite validating backend routing decisions for extender vs
    legacy paths
  • Tests fail-closed safety gates for mutations with unknown capabilities
  • Validates capability contract enforcement and fallback routing logic
  • Covers auto-preference routing with capability availability scenarios
+192/-0 
NamedPipeExtenderBackendTests.cs
Named-pipe extender backend communication tests                   

tests/SwfocTrainer.Tests/Runtime/NamedPipeExtenderBackendTests.cs

  • New test suite for named-pipe extender backend communication
  • Tests capability probing with mock bridge server responses
  • Validates action execution payload passing and hook state tracking
  • Covers error handling for unavailable bridge scenarios
+171/-0 
SdkOperationRouterTests.cs
SDK operation router feature flag and mode validation tests

tests/SwfocTrainer.Tests/Core/SdkOperationRouterTests.cs

  • New test suite for SDK operation router with feature flag and mode
    validation
  • Tests feature flag gating and runtime attachment requirements
  • Validates mode mismatch blocking for tactical-only operations
  • Includes fake implementations for SDK runtime adapter and capability
    resolution
+169/-0 
CapabilityMapResolverTests.cs
Capability map resolver anchor validation tests                   

tests/SwfocTrainer.Tests/Runtime/CapabilityMapResolverTests.cs

  • New test suite for capability map resolution from fingerprints
  • Tests available state when required anchors present
  • Validates degraded state for missing optional anchors
  • Covers fail-closed behavior for missing fingerprint maps
+149/-0 
ProfileVariantResolverTests.cs
Profile variant resolver detection and fallback tests       

tests/SwfocTrainer.Tests/Runtime/ProfileVariantResolverTests.cs

  • New test suite for universal profile variant resolution
  • Tests explicit profile passthrough for non-universal profiles
  • Validates workshop marker detection for ROE profile selection
  • Covers fallback to exe-target-based profiles and safe defaults
+86/-0   
LiveRoeRuntimeHealthTests.cs
Live ROE runtime health test evidence capture                       

tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs

  • Added credits value normalization to prevent negative values in live
    test
  • Added evidence capture to JSON output for backend route and hook state
    diagnostics
  • Enhanced logging with readback values and normalized request values
+50/-1   
RuntimeModeProbeResolverTests.cs
Runtime mode probe resolver symbol-based detection tests 

tests/SwfocTrainer.Tests/Runtime/RuntimeModeProbeResolverTests.cs

  • New test suite for runtime mode probing from symbol health
  • Tests tactical mode detection from tactical indicator symbols
  • Tests galactic mode detection from galactic indicator symbols
  • Covers ambiguous signal handling and hint-based fallback
+87/-0   
ProfileValidatorTests.cs
Profile validator unit tests                                                         

tests/SwfocTrainer.Tests/Profiles/ProfileValidatorTests.cs

  • Adds unit tests for ProfileValidator.Validate() method
  • Tests validation of backendPreference and hostPreference fields with
    invalid values
  • Tests successful validation for extender profile contract with
    required capabilities
  • Provides comprehensive BuildProfile() helper for test profile
    construction
+76/-0   
SdkExecutionGuardTests.cs
SDK execution guard unit tests                                                     

tests/SwfocTrainer.Tests/Runtime/SdkExecutionGuardTests.cs

  • Tests SdkExecutionGuard.CanExecute() method for capability-based
    execution decisions
  • Validates read operations allowed in degraded capability state
  • Validates write operations blocked when capability unavailable
  • Uses CapabilityResolutionResult test fixtures
+49/-0   
BinaryFingerprintServiceTests.cs
Binary fingerprint service unit tests                                       

tests/SwfocTrainer.Tests/Runtime/BinaryFingerprintServiceTests.cs

  • Tests BinaryFingerprintService.CaptureFromPathAsync() for stable
    fingerprint generation
  • Validates fingerprint properties: source path, module name, SHA256
    hash, fingerprint ID
  • Uses temporary file for testing with proper cleanup
+34/-0   
ProfileInheritanceTests.cs
Profile inheritance test enhancements                                       

tests/SwfocTrainer.Tests/Profiles/ProfileInheritanceTests.cs

  • Adds assertions for inherited profile properties: HostPreference,
    BackendPreference, RequiredCapabilities
  • Updates test name from "All Four Target Profiles" to "All Target
    Profiles"
  • Adds new universal_auto profile to expected manifest list
+6/-2     
ProfileActionCatalogTests.cs
SDK routing validation for credits action                               

tests/SwfocTrainer.Tests/Profiles/ProfileActionCatalogTests.cs

  • Adds test SwfocProfiles_Should_Route_SetCredits_Via_Sdk() validating
    SDK routing for credits writes
  • Verifies set_credits action uses ExecutionKind.Sdk for SWFOC profiles
  • Tests all three SWFOC profiles: base_swfoc, aotr_1397421866_swfoc,
    roe_3447786229_swfoc
+20/-0   
Configuration changes (18 files)
CMakeLists.txt
Native extender CMake root configuration                                 

native/CMakeLists.txt

  • Configures C++20 standard for native extender project
  • Defines subdirectory build order for Core, Bridge, Overlay, and
    Plugins
+11/-0   
CMakeLists.txt
Core library CMake configuration                                                 

native/SwfocExtender.Core/CMakeLists.txt

  • Configures Core library with CapabilityProbe and HookLifecycleManager
    sources
+7/-0     
CMakeLists.txt
Bridge library and host executable CMake configuration     

native/SwfocExtender.Bridge/CMakeLists.txt

  • Configures Bridge library and SwfocExtender.Host executable
  • Links Host to Bridge and Plugins libraries
+23/-0   
CMakeLists.txt
Plugins library CMake configuration                                           

native/SwfocExtender.Plugins/CMakeLists.txt

  • Configures Plugins library with EconomyPlugin source
+10/-0   
CMakeLists.txt
Overlay library CMake configuration                                           

native/SwfocExtender.Overlay/CMakeLists.txt

  • Configures Overlay library with OverlayState source
+10/-0   
bootstrap-wsl-toolchain.sh
WSL native toolchain bootstrap script                                       

tools/native/bootstrap-wsl-toolchain.sh

  • Bootstraps portable CMake and Ninja toolchain for WSL environments
  • Falls back to local installation under tools/native/.local if sudo
    unavailable
  • Downloads and extracts prebuilt binaries from official GitHub releases
+69/-0   
build-native.sh
WSL native build script                                                                   

tools/native/build-native.sh

  • Builds native extender using CMake and Ninja on WSL
  • Adds local toolchain to PATH if available
  • Validates cmake and ninja availability before build
+26/-0   
repro-bundle.schema.json
Repro bundle schema updates for backend routing and host selection

tools/schemas/repro-bundle.schema.json

  • Adds selectedHostProcess schema with host ranking fields
  • Adds backendRouteDecision schema with backend and reason code
  • Adds capabilityProbeSnapshot schema with capability count and required
    capabilities
  • Adds hookInstallReport schema with state and reason code
  • Adds overlayState schema with availability and visibility flags
+58/-0   
repro_bundle_sample.json
Repro bundle sample fixture with vNext sections                   

tools/fixtures/repro_bundle_sample.json

  • Adds sample selectedHostProcess with host ranking fields
  • Adds sample backendRouteDecision with memory fallback route
  • Adds sample capabilityProbeSnapshot with extender backend
  • Adds sample hookInstallReport with unknown state
  • Adds sample overlayState with unavailable status
+32/-0   
binary-fingerprint.schema.json
Binary fingerprint schema definition                                         

tools/schemas/binary-fingerprint.schema.json

  • Defines binary fingerprint schema with SHA256, module name, and
    timestamp
  • Includes module list and source path for provenance tracking
+34/-0   
signature-pack.schema.json
Signature pack schema definition                                                 

tools/schemas/signature-pack.schema.json

  • Defines signature pack schema with anchors and operation requirements
  • Specifies required/optional anchor dependencies per operation
+57/-0   
binary_fingerprint_sample.json
Binary fingerprint sample fixture                                               

tools/fixtures/binary_fingerprint_sample.json

  • Provides sample binary fingerprint with SHA256 and module metadata
+17/-0   
signature_pack_sample.json
Signature pack sample fixture                                                       

tools/fixtures/signature_pack_sample.json

  • Provides sample signature pack with anchor definitions and operation
    mappings
+66/-0   
base_sweaw.json
Base SWEAW profile backend routing configuration                 

profiles/default/profiles/base_sweaw.json

  • Adds backendPreference, requiredCapabilities, hostPreference,
    experimentalFeatures fields
+4/-0     
base_swfoc.json
Base SWFOC profile extender routing and SDK execution configuration

profiles/default/profiles/base_swfoc.json

  • Adds backend routing fields with auto preference and
    starwarsg_preferred host selection
  • Adds set_credits to required capabilities
  • Changes set_credits execution kind from Memory to Sdk
  • Adds experimental features list
+13/-2   
aotr_1397421866_swfoc.json
AOTR profile backend routing configuration                             

profiles/default/profiles/aotr_1397421866_swfoc.json

  • Adds backend routing fields with auto preference and
    starwarsg_preferred host selection
  • Adds required capabilities for credits and hero state helper
  • Adds experimental features list
+10/-0   
roe_3447786229_swfoc.json
ROE profile backend routing configuration                               

profiles/default/profiles/roe_3447786229_swfoc.json

  • Adds backend routing fields with auto preference and
    starwarsg_preferred host selection
  • Adds required capabilities for credits and ROE-specific helpers
  • Adds experimental features list
+11/-0   
manifest.json
Profile manifest update for universal auto profile             

profiles/default/manifest.json

  • Adds universal_auto profile entry to manifest with R&D version tag
+8/-0     
Documentation (11 files)
README.md
Native extender documentation and build guide                       

native/README.md

  • Documents native extender architecture (Core, Bridge, Overlay, Plugins
    layers)
  • Provides CMake build instructions for Windows
  • Describes SwfocExtender.Host executable and named-pipe interface
  • Documents supported commands and environment variable override
+31/-0   
EXTERNAL_TOOLS_SETUP.md
Native extender toolchain setup documentation                       

docs/EXTERNAL_TOOLS_SETUP.md

  • Documents native extender toolchain setup for WSL and Windows paths
  • Provides bootstrap and build commands for both platforms
  • Documents cmake resolution via resolve-cmake.ps1 for Windows
  • Describes Visual Studio generator auto-selection and fallback behavior
+59/-0   
ARCHITECTURE.md
Architecture documentation for tiered backend routing and extender
path

docs/ARCHITECTURE.md

  • Adds tiered backend routing section describing fail-closed decision
    tree
  • Documents extender capability probing and route resolution by priority
  • Updates credits system to describe extender path via ExecutionKind.Sdk
  • Adds native extender skeleton to component list
  • Updates execution kind dispatch table with Sdk kind
+32/-5   
PROFILE_FORMAT.md
Profile format documentation for backend routing and capability maps

docs/PROFILE_FORMAT.md

  • Adds backendPreference, requiredCapabilities, hostPreference,
    experimentalFeatures fields
  • Documents set_credits execution via Sdk kind for FoC profiles
  • Adds runtime host selection diagnostics section with hostRole,
    mainModuleSize, workshopMatchCount, selectionScore
  • Documents universal auto profile resolution priority and diagnostics
    metadata
  • Adds SDK capability maps section with fingerprint-scoped contract
    storage
+60/-0   
TEST_PLAN.md
Test plan documentation for SDK and backend routing features

docs/TEST_PLAN.md

  • Adds unit test coverage for fingerprint service, capability maps,
    profile variants, runtime mode probes
  • Adds SDK execution guard, operation router, backend router, and
    extender backend tests
  • Adds profile validator tests for backend/host preference contracts
  • Documents binary fingerprint and signature pack schema validation
    commands
  • Updates manual runtime checks to include host ranking fields and
    backend route diagnostics
+39/-1   
LIVE_VALIDATION_RUNBOOK.md
Live validation runbook updates for extender bridge and ROE hard gate

docs/LIVE_VALIDATION_RUNBOOK.md

  • Adds precondition for extender bridge host running on named pipe
  • Documents final ROE hard gate validation with
    RequireNonBlockedClassification flag
  • Adds vNext bundle sections to expected outputs
  • Updates closure criteria to include host ranking diagnostics and
    extender credits path evidence
+25/-0   
source-corpus.md
Research source corpus documentation                                         

tools/research/source-corpus.md

  • Documents official sources (Petroglyph, SteamDB) for game build
    signals
  • Lists community validation sources with priority policy
+25/-0   
build-fingerprint.md
Build fingerprint contract documentation                                 

tools/research/build-fingerprint.md

  • Defines fingerprint ID naming convention and required fields
  • Documents output location and immutability contract
+26/-0   
RESEARCH_GAME_WORKFLOW.md
Game workflow research contract and safety rules                 

docs/RESEARCH_GAME_WORKFLOW.md

  • Defines reverse-engineering evidence contract for R&D track
  • Establishes scope, mandatory artifacts, and provenance requirements
  • Documents safety rules for capability uncertainty and fail-closed
    behavior
+33/-0   
README.md
Capability maps directory documentation                                   

profiles/default/sdk/maps/README.md

  • Documents capability maps keyed by fingerprint ID
  • Specifies filename convention and promotion workflow
+8/-0     
README.md
README updates for universal auto and research tooling     

README.md

  • Adds universal_auto to supported profiles list
  • Documents R&D research tooling commands for capability intel and
    fingerprinting
  • Adds research contracts and schema documentation references
+18/-0   
Additional files (16 files)
ci.yml +10/-0   
20260218-213838-live-credits.trx +142/-0 
20260218-213838-live-hero-helper.trx +39/-0   
20260218-213838-live-roe-health.trx +53/-0   
20260218-213838-live-tactical.trx +39/-0   
issue-19-evidence-template.md +22/-0   
issue-34-evidence-template.md +30/-0   
launch-context-fixture.json +6/-0     
live-roe-runtime-evidence.json +34/-0   
live-validation-summary.json +46/-0   
repro-bundle.json +128/-0 
repro-bundle.md +35/-0   
NamedPipeBridgeServer.cpp +282/-0 
starwarsg_d3e0b6d3a9a31f42.json +48/-0   
validate-policy-contracts.ps1 +15/-1   
validate-repro-bundle.ps1 +21/-0   

Summary by CodeRabbit

  • New Features

    • Universal "auto" profile that auto-resolves to concrete runtime variants; SDK operation path and tiered backend routing (Extender/Helper/Memory) including routed credits writes and safer fail-closed behavior.
    • Local native extender bridge (named-pipe) with native build tooling and runtime bridge host.
  • Documentation

    • Expanded architecture, profile format, live-validation runbook, R&D workflow/tooling, schemas and tooling guides.
  • Tests

    • Added unit and live validation coverage for fingerprinting, capability maps, routing, variant resolution, SDK guard, and extender integration.
  • Chores

    • Updated ignore rules, CI schema smoke validations, and added validation/utility scripts and fixtures.

@devloai

devloai bot commented Feb 18, 2026

Unable to trigger custom agent "Code Reviewer". You have run out of credits 😔
Please upgrade your plan or buy additional credits from the subscription page.

@github-actions github-actions bot added the following labels on Feb 18, 2026: area:runtime (Runtime attach, memory actions, symbol resolution), area:app (WPF app/UI/ViewModel), area:profiles (Profile metadata, inheritance, manifest), area:ci (CI/CD and automation), area:tooling (Scripts and developer tooling), area:docs (Documentation/process related work), needs-reviewer (No eligible non-author reviewer available).
@github-actions

Reviewer automation could not request a non-author reviewer for this PR.

  • PR author: @Prekzursil
  • configured users: Prekzursil
  • configured teams:
  • fallback label applied: needs-reviewer

Next steps:

  1. Add at least one non-author collaborator/team to config/reviewer-roster.json.
  2. Re-run reviewer automation (or request review manually).

@coderabbitai

coderabbitai bot commented Feb 18, 2026

Warning

Rate limit exceeded

@Prekzursil has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 11 minutes and 43 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


📝 Walkthrough

Adds R&D tooling, JSON schemas and validators, SDK capability and routing models, a native named‑pipe extender skeleton (C++ bridge + plugins), runtime backend routing and SDK operation services, profile variant resolution, many tests and live run artifacts, CI schema‑smoke steps, and documentation/tooling for research/repro bundles.

Changes

Cohort / File(s) Summary
CI & Ignore
.github/workflows/ci.yml, .gitignore
Adds PowerShell schema‑smoke CI steps (binary fingerprint & signature pack) — duplicated placement in job — and new ignore patterns for native build/runtime artifacts.
Docs & Runbooks
README.md, TODO.md, docs/...
Adds R&D tooling docs, architecture vNext, live validation runbook updates, profile/schema docs, research workflow, and test plan changes describing vNext artifacts and gates.
Native Extender (C++)
native/CMakeLists.txt, native/.../SwfocExtender.*/*
Adds native extender skeleton (Core, Bridge, Overlay, Plugins), CMake targets, NamedPipeBridgeServer, BridgeHostMain, EconomyPlugin, and overlay/core components.
Schemas, Fixtures & Validators
tools/schemas/*, tools/fixtures/*, tools/validate-*.ps1
Adds binary‑fingerprint and signature‑pack JSON schemas, bumps repro‑bundle schema to 1.1, adds sample fixtures and validation scripts.
Research Tooling & Scripts
tools/research/*, tools/native/*, tools/collect-mod-repro-bundle.ps1, tools/run-live-validation.ps1
Adds fingerprint/signature capture/generation/normalize scripts, native build/bootstrap scripts, repro bundle collection enhancements, live validation gating flag and artifact resolution.
Core Models & Contracts
src/SwfocTrainer.Core/Models/*, src/SwfocTrainer.Core/Contracts/*
Introduces backend routing models, runtime reason codes, SDK capability/fingerprint models, ProcessHostRole, and many new public interfaces (IBinaryFingerprintService, ICapabilityMapResolver, IProfileVariantResolver, IExecutionBackend, ISdkOperationRouter, ISdkRuntimeAdapter, IBackendRouter, ISdkExecutionGuard, ISdkDiagnosticsSink).
Runtime Services & Routing
src/SwfocTrainer.Runtime/Services/*
Implements BackendRouter, BinaryFingerprintService, CapabilityMapResolver, ProfileVariantResolver, RuntimeModeProbeResolver, SdkOperationRouter, SdkExecutionGuard, NamedPipeExtenderBackend, Noop SDK adapter; integrates routing/diagnostics into RuntimeAdapter and process metadata.
App, DI & Profiles
src/SwfocTrainer.App/*, src/SwfocTrainer.Profiles/*, profiles/default/*
Registers new services in DI; MainViewModel gains universal profile handling, variant resolution, backend diagnostics and save gating; profile merge/validation updated and new universal_auto profile added.
Tests & Live Results
tests/*, TestResults/runs/20260218-213838/*
Adds extensive unit tests for new services (router, resolver, fingerprint, capability maps, extender backend, SDK guard/router), profile tests, and new live TRX/repro-bundle artifacts capturing a run.
Tooling: native build & artifacts
tools/native/*, tools/collect-mod-repro-bundle.ps1
Adds WSL/Windows native build scripts, cmake resolver, bootstrap helpers, and extended repro-bundle fields (host selection, backend route, capability probe, hook/overlay state).

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant MainViewModel
    participant ProfileVariantResolver
    participant BinaryFingerprintService
    participant CapabilityMapResolver
    participant ProfileRepository

    Client->>MainViewModel: AttachAsync(requestedProfileId)
    MainViewModel->>ProfileVariantResolver: ResolveAsync(requestedProfileId, processes)
    alt requestedProfileId == universal_auto
        ProfileVariantResolver->>BinaryFingerprintService: CaptureFromPathAsync(modulePath)
        BinaryFingerprintService-->>ProfileVariantResolver: BinaryFingerprint
        ProfileVariantResolver->>CapabilityMapResolver: ResolveDefaultProfileIdAsync(fingerprint)
        CapabilityMapResolver-->>ProfileVariantResolver: defaultProfileId
        ProfileVariantResolver->>ProfileRepository: GetProfileAsync(defaultProfileId)
        ProfileRepository-->>ProfileVariantResolver: ConcreteProfile
    else
        ProfileVariantResolver->>ProfileRepository: GetProfileAsync(requestedProfileId)
        ProfileRepository-->>ProfileVariantResolver: ConcreteProfile
    end
    ProfileVariantResolver-->>MainViewModel: ProfileVariantResolution
    MainViewModel-->>Client: AttachComplete(effectiveProfile)
sequenceDiagram
    participant Caller
    participant SdkOperationRouter
    participant ProfileVariantResolver
    participant CapabilityMapResolver
    participant SdkExecutionGuard
    participant RuntimeAdapter
    participant Extender

    Caller->>SdkOperationRouter: ExecuteAsync(SdkOperationRequest)
    SdkOperationRouter->>ProfileVariantResolver: ResolveAsync(profileId,...)
    ProfileVariantResolver-->>SdkOperationRouter: ProfileVariantResolution
    SdkOperationRouter->>CapabilityMapResolver: ResolveAsync(fingerprint, operationId, anchors)
    CapabilityMapResolver-->>SdkOperationRouter: CapabilityResolutionResult
    SdkOperationRouter->>SdkExecutionGuard: CanExecute(resolution, isMutation)
    alt Allowed
        SdkOperationRouter->>RuntimeAdapter: ExecuteAsync(routedRequest)
        RuntimeAdapter->>Extender: SendCommand (NamedPipe)
        Extender-->>RuntimeAdapter: ExtenderResult
        RuntimeAdapter-->>SdkOperationRouter: ActionExecutionResult
        SdkOperationRouter-->>Caller: SdkOperationResult(success)
    else Blocked
        SdkOperationRouter-->>Caller: SdkOperationResult(failed, reason)
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Poem

🐇 I hop through manifests, schemas bright,
Fingerprints sealed by moonlit night,
Pipes that whisper plugin cheers,
Diagnostics bloom — no fearful fears,
Hooray — the universal profile takes flight!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 1.47%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title "SWFOC: proven-core extender reconciliation + ROE hard gate" clearly and specifically describes the main change: reconciling the extender core and adding ROE validation.
  • Description check: ✅ Passed. The PR description includes all required sections: Summary (what changed and why), Risk level, Affected Profiles, Reliability Evidence with deterministic tests and repro bundle paths, Validation commands, and proper formatting per the template.



@qodo-code-review

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
Untrusted process launch

Description: The extender host process path can be sourced from the SWFOC_EXTENDER_HOST_PATH
environment variable and then executed via Process.Start, which could enable local
arbitrary code execution if an attacker can influence that environment variable or drop a
malicious SwfocExtender.Host.exe in a probed directory.
NamedPipeExtenderBackend.cs [224-275]

Referred Code
private static bool TryStartBridgeHostProcess()
{
    lock (HostSync)
    {
        if (_bridgeHostProcess is not null && !_bridgeHostProcess.HasExited)
        {
            return true;
        }

        var hostPath = ResolveBridgeHostPath();
        if (hostPath is null)
        {
            return false;
        }

        try
        {
            _bridgeHostProcess = Process.Start(new ProcessStartInfo
            {
                FileName = hostPath,
                WorkingDirectory = Path.GetDirectoryName(hostPath) ?? AppContext.BaseDirectory,



 ... (clipped 31 lines)
Named pipe spoofing

Description: The code connects to a fixed named pipe (SwfocExtenderBridge) without authenticating the
server, allowing a local attacker to spoof the pipe server and receive/alter extender
commands and responses (e.g., falsely reporting capabilities or returning crafted
diagnostics).
NamedPipeExtenderBackend.cs [156-222]

Referred Code
private static async Task<ExtenderResult> SendCoreAsync(ExtenderCommand command, CancellationToken cancellationToken)
{
    try
    {
        using var timeoutCts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
        timeoutCts.CancelAfter(DefaultResponseTimeoutMs + DefaultConnectTimeoutMs + 100);

        using var client = new NamedPipeClientStream(
            serverName: ".",
            pipeName: DefaultPipeName,
            direction: PipeDirection.InOut,
            options: PipeOptions.Asynchronous);

        await client.ConnectAsync(DefaultConnectTimeoutMs, timeoutCts.Token);

        using var writer = new StreamWriter(client, new UTF8Encoding(false), leaveOpen: true)
        {
            AutoFlush = true
        };
        using var reader = new StreamReader(client, Encoding.UTF8, leaveOpen: true);




 ... (clipped 46 lines)
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
No custom compliance provided

Follow the guide to enable custom compliance checks.

Compliance status legend:
🟢 - Fully compliant
🟡 - Partially compliant
🔴 - Not compliant
⚪ - Requires further human verification
🏷️ - Compliance label

@qodo-code-review

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: Possible issue
Correctly merge capabilities in probe

In ProbeCapabilitiesAsync, merge inferred capabilities into the existing
report's capabilities dictionary instead of replacing it to preserve diagnostic
information.

src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs [882-908]

 private async Task<CapabilityReport> ProbeCapabilitiesAsync(TrainerProfile profile, CancellationToken cancellationToken)
 {
     if (_extenderBackend is null || CurrentSession is null)
     {
         return CapabilityReport.Unknown(profile.Id, RuntimeReasonCode.CAPABILITY_BACKEND_UNAVAILABLE);
     }
 
     var report = await _extenderBackend.ProbeCapabilitiesAsync(profile.Id, CurrentSession.Process, cancellationToken);
-    if (report.Capabilities.Count > 0)
+    
+    var finalCapabilities = new Dictionary<string, BackendCapability>(report.Capabilities, StringComparer.OrdinalIgnoreCase);
+
+    // Expose SDK execution catalog IDs as unknown capabilities so router diagnostics are explicit.
+    foreach (var featureId in profile.RequiredCapabilities ?? Array.Empty<string>())
     {
-        return report;
+        if (!finalCapabilities.ContainsKey(featureId))
+        {
+            finalCapabilities[featureId] = new BackendCapability(
+                featureId,
+                Available: false,
+                Confidence: CapabilityConfidenceState.Unknown,
+                ReasonCode: RuntimeReasonCode.CAPABILITY_UNKNOWN,
+                Notes: "Capability not confirmed by extender probe.");
+        }
     }
 
-    // Expose SDK execution catalog IDs as unknown capabilities so router diagnostics are explicit.
-    var inferredCapabilities = new Dictionary<string, BackendCapability>(StringComparer.OrdinalIgnoreCase);
-    foreach (var featureId in profile.RequiredCapabilities ?? Array.Empty<string>())
-    {
-        inferredCapabilities[featureId] = new BackendCapability(
-            featureId,
-            Available: false,
-            Confidence: CapabilityConfidenceState.Unknown,
-            ReasonCode: RuntimeReasonCode.CAPABILITY_UNKNOWN,
-            Notes: "Capability not confirmed by extender probe.");
-    }
-
-    return report with { Capabilities = inferredCapabilities };
+    return report with { Capabilities = finalCapabilities };
 }

Suggestion importance[1-10]: 7

Why: The suggestion correctly identifies that returning report with { Capabilities = inferredCapabilities } replaces the report's capabilities dictionary outright instead of merging into it. The fix preserves whatever capabilities the probe already returned while adding unknown placeholders, improving the robustness of diagnostic data handling.

Medium
Reflect actual hook availability

Update capabilitySnapshot to set creditsAvailable based on the installed flag to
accurately reflect whether the hook is installed.

native/SwfocExtender.Plugins/src/EconomyPlugin.cpp [53-60]

 CapabilitySnapshot EconomyPlugin::capabilitySnapshot() const {
     const auto installed = hookInstalled_.load();
     return CapabilitySnapshot{
-        .creditsAvailable = true,
+        .creditsAvailable = installed,
         .creditsState = installed ? "Verified" : "Experimental",
         .reasonCode = installed ? "CAPABILITY_PROBE_PASS" : "CAPABILITY_FEATURE_EXPERIMENTAL"
     };
 }
Suggestion importance[1-10]: 7

Why: This is a good suggestion that corrects the logic to accurately report capability availability based on the hook's installation status, improving the correctness of the capability snapshot.

Medium
Use unique filenames for evidence

To prevent overwriting diagnostic data, generate a unique filename for each
live-roe-runtime-evidence.json file by appending a timestamp.

tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs [176-177]

-var path = Path.Combine(outputDir, "live-roe-runtime-evidence.json");
+var fileName = $"live-roe-runtime-evidence-{DateTimeOffset.UtcNow.ToUnixTimeMilliseconds()}.json";
+var path = Path.Combine(outputDir, fileName);
 File.WriteAllText(path, JsonSerializer.Serialize(payload, new JsonSerializerOptions { WriteIndented = true }));

Suggestion importance[1-10]: 6

Why: The suggestion correctly identifies that a static filename in a test helper can cause file overwrites, leading to flaky tests and lost diagnostics, and proposes a valid fix to ensure unique filenames.

Low
Improve JSON object extraction to handle whitespace

Update ExtractObjectJson to handle whitespace between the colon separator and
the opening brace of a JSON object to prevent parsing failures.

native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp [42-72]

 std::string ExtractObjectJson(const std::string& json, const std::string& key) {
     const auto quotedKey = "\"" + key + "\"";
     auto keyPos = json.find(quotedKey);
     if (keyPos == std::string::npos) {
         return "{}";
     }
 
     auto colonPos = json.find(':', keyPos + quotedKey.length());
     if (colonPos == std::string::npos) {
         return "{}";
     }
 
-    auto openBrace = json.find('{', colonPos + 1);
-    if (openBrace == std::string::npos) {
+    auto openBrace = json.find_first_not_of(" \t\r\n", colonPos + 1);
+    if (openBrace == std::string::npos || json[openBrace] != '{') {
         return "{}";
     }
 
     auto depth = 0;
     for (std::size_t i = openBrace; i < json.size(); ++i) {
         if (json[i] == '{') {
             ++depth;
         } else if (json[i] == '}') {
             --depth;
             if (depth == 0) {
                 return json.substr(openBrace, i - openBrace + 1);
             }
         }
     }
 
     return "{}";
 }
Suggestion importance[1-10]: 6

Why: The suggestion correctly identifies a bug in the manual JSON parser where it fails to handle whitespace, and the proposed code fixes this, making the parser more robust against valid JSON formatting variations.

Low
General
Improve bridge host startup logic

In SendAsync, attempt to start the bridge host process on any connection
failure, not just when the HookState is "unreachable", to improve robustness.

src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs [140-154]

 private static async Task<ExtenderResult> SendAsync(ExtenderCommand command, CancellationToken cancellationToken)
 {
     var firstAttempt = await SendCoreAsync(command, cancellationToken);
-    if (firstAttempt.Succeeded || firstAttempt.HookState != "unreachable")
+    if (firstAttempt.Succeeded)
     {
         return firstAttempt;
     }
 
+    // Attempt to start the host on any failure, as it might not be running.
     if (!TryStartBridgeHostProcess())
     {
         return firstAttempt;
     }
 
+    // Wait a moment for the process to initialize before retrying.
+    await Task.Delay(250, cancellationToken);
     return await SendCoreAsync(command, cancellationToken);
 }

Suggestion importance[1-10]: 7

Why: The suggestion correctly points out that relying solely on the "unreachable" hook state is brittle. Broadening the condition to retry on any failure makes the connection logic more resilient to various failure scenarios, such as timeouts, thus improving the service's reliability.

Medium
Add try-catch around file write

Add a try-catch block around the File.WriteAllText call to handle potential I/O
errors and prevent the test from failing unexpectedly.

tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs [176-177]

 var path = Path.Combine(outputDir, "live-roe-runtime-evidence.json");
-File.WriteAllText(path, JsonSerializer.Serialize(payload, new JsonSerializerOptions { WriteIndented = true }));
+try
+{
+    File.WriteAllText(path, JsonSerializer.Serialize(payload, new JsonSerializerOptions { WriteIndented = true }));
+}
+catch (Exception)
+{
+    // Swallow or log exception so test execution continues.
+}

Suggestion importance[1-10]: 5

Why: The suggestion correctly proposes adding error handling for a file I/O operation within a test, which can improve test robustness by preventing crashes due to external factors.

Low

@codacy-production

codacy-production bot commented Feb 18, 2026

Codacy's Analysis Summary

  • New issues: 0 (threshold: ≤ 0)
  • New security issues: 0
  • Complexity: 596
  • Duplications: 18



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

matches = processes
    .Where(IsStarWarsGProcess)
    .ToArray();

P1 Badge Restrict StarWarsG fallback to FoC profile attaches

This fallback now runs for every profile when no direct ExeTarget match is found, so attaching a non-FoC profile (for example base_sweaw) can incorrectly bind to a running StarWarsG.exe FoC process instead of returning ATTACH_NO_PROCESS. That leads to resolving symbols and executing actions against the wrong game binary, which is a safety/accuracy regression. The fallback should be gated to FoC-specific attaches (e.g., profile.ExeTarget == ExeTarget.Swfoc).
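A minimal sketch of the gating this asks for, assuming the fallback lives in a helper that can see the attaching profile; ResolveFallbackMatches is an illustrative name, while TrainerProfile, ExeTarget.Swfoc, and IsStarWarsGProcess are taken from this thread rather than verified against the repository:

// Hypothetical sketch: only run the StarWarsG.exe fallback for FoC-targeted profiles.
private static Process[] ResolveFallbackMatches(TrainerProfile profile, Process[] processes)
{
    // Non-FoC profiles must fail closed (ATTACH_NO_PROCESS) instead of binding
    // to a running Forces of Corruption host.
    if (profile.ExeTarget != ExeTarget.Swfoc)
    {
        return Array.Empty<Process>();
    }

    return processes
        .Where(IsStarWarsGProcess)
        .ToArray();
}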

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".


var attachedProfile = _attachedProfile
    ?? await _profileRepository.ResolveInheritedProfileAsync(request.ProfileId, cancellationToken);
var capabilityReport = await ProbeCapabilitiesAsync(attachedProfile, cancellationToken);


P2 Badge Skip unconditional extender probing before route selection

Every action execution now probes extender capabilities before routing, even for actions that will ultimately run on memory/helper/save backends. In the common environment where the named-pipe bridge is not running, this incurs repeated pipe connect timeouts before each action and makes non-extender operations stall noticeably. Probe extender capabilities only when extender is a possible route (or cache probe failures) to avoid this runtime latency regression.
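A rough sketch of the gating and caching this suggests; RouteCouldUseExtender, the request type, and the 30-second cache window are assumptions for illustration, while CapabilityReport.Unknown and RuntimeReasonCode.CAPABILITY_BACKEND_UNAVAILABLE come from the diff shown earlier in this thread:

// Hypothetical sketch: probe the extender only when it is a candidate route,
// and reuse a recent result so repeated actions do not stack pipe timeouts.
private CapabilityReport? _cachedCapabilityReport;
private DateTimeOffset _cachedCapabilityReportAtUtc;

private async Task<CapabilityReport> ProbeCapabilitiesIfRoutableAsync(
    TrainerProfile profile,
    ActionExecutionRequest request,
    CancellationToken cancellationToken)
{
    if (!RouteCouldUseExtender(profile, request))
    {
        // Memory/helper/save-only actions never need the named-pipe round trip.
        return CapabilityReport.Unknown(profile.Id, RuntimeReasonCode.CAPABILITY_BACKEND_UNAVAILABLE);
    }

    if (_cachedCapabilityReport is not null &&
        DateTimeOffset.UtcNow - _cachedCapabilityReportAtUtc < TimeSpan.FromSeconds(30))
    {
        return _cachedCapabilityReport;
    }

    _cachedCapabilityReport = await ProbeCapabilitiesAsync(profile, cancellationToken);
    _cachedCapabilityReportAtUtc = DateTimeOffset.UtcNow;
    return _cachedCapabilityReport;
}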



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 16

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
docs/LIVE_VALIDATION_RUNBOOK.md (1)

38-44: ⚠️ Potential issue | 🟡 Minor

Optional ROE scope run omits -Strict, creating inconsistent failure semantics vs. the hard gate.

Line 42's optional run can silently swallow conditions that the formal ROE hard gate (lines 29-36) would fail on. An analyst who runs the optional command expecting parity with the gate will get a different failure surface.

Consider annotating the optional runs with a comment that -Strict and -RequireNonBlockedClassification are intentionally omitted for quick diagnostics, so the asymmetry is explicit rather than accidental.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/LIVE_VALIDATION_RUNBOOK.md` around lines 38 - 44, The optional ROE scope
invocation of pwsh ./tools/run-live-validation.ps1 omits the -Strict and
-RequireNonBlockedClassification flags, causing different failure semantics vs.
the formal ROE hard gate; update the docs by adding a brief inline note next to
the optional ROE command (pwsh ./tools/run-live-validation.ps1 -NoBuild -Scope
ROE -EmitReproBundle $true) stating that -Strict and
-RequireNonBlockedClassification are intentionally omitted for quicker
diagnostics and that the hard gate uses those flags for strict checks, so users
understand the asymmetry.
tools/validate-policy-contracts.ps1 (1)

40-70: ⚠️ Potential issue | 🟠 Major

Add docs/LIVE_VALIDATION_RUNBOOK.md to the $requiredFiles array.

The file exists in the repo but is missing from the policy contract validator at lines 40-70. Since similar runbooks (docs/MOD_ONBOARDING_RUNBOOK.md, docs/RELEASE_RUNBOOK.md) are already in the contract, this file should be added to ensure future deletions are caught by CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/validate-policy-contracts.ps1` around lines 40 - 70, The $requiredFiles
array in validate-policy-contracts.ps1 is missing
docs/LIVE_VALIDATION_RUNBOOK.md; update the $requiredFiles variable (the array
initializer named $requiredFiles) to include "docs/LIVE_VALIDATION_RUNBOOK.md"
alongside the other runbook entries (e.g., "docs/MOD_ONBOARDING_RUNBOOK.md",
"docs/RELEASE_RUNBOOK.md") so the policy contract validator will check for that
file.
profiles/default/profiles/base_swfoc.json (1)

77-88: ⚠️ Potential issue | 🟡 Minor

executionKind: "Sdk" is a hard-gate change: set_credits silently blocks for users without the native extender.

Previously Memory-backed, this action now routes exclusively through the SDK/Extender path. Per the architecture, if the extender is unavailable the route is SAFETY_MUTATION_BLOCKED (fail-closed). With backendPreference: "auto" and requiredCapabilities: ["set_credits"], there is no silent fallback to the old memory path. Confirm that:

  1. A user-visible error message (not just a reason code) is surfaced when the extender is absent.
  2. The upgrade/deployment notes for this profile change document the extender prerequisite.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@profiles/default/profiles/base_swfoc.json` around lines 77 - 88, The change
sets executionKind for the action "set_credits" to "Sdk", which will block users
if the native extender is missing; update the profile to surface a clear
user-visible error and document the extender prerequisite: modify the
"set_credits" action handling so that when executionKind == "Sdk" and the
extender is unavailable it returns a user-facing error message (not just
SAFETY_MUTATION_BLOCKED) and/or provide a fallback by adjusting
"backendPreference" (e.g., allow memory fallback) or adding a conditional route;
also add upgrade/deployment notes documenting the extender requirement and
mention requiredCapabilities ["set_credits"] so operators are aware.
src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs (1)

145-253: ⚠️ Potential issue | 🟠 Major

Add unit tests for RuntimeAdapter's attach variant resolution and mode probing orchestration.

RuntimeAdapter now orchestrates variant resolution (lines 155–168) and runtime mode probing (lines 206–212) in AttachAsync, attaching diagnostic metadata (runtimeModeHint, runtimeModeEffective, runtimeModeReasonCode, etc.). Per coding guidelines, attach/mode/resolution behavior changes in runtime files must include tests.

RuntimeAttachSmokeTests.cs exists but only verifies basic attach/detach with an explicit profile ID, skipping the variant resolution code path entirely. Component-level tests cover ProfileVariantResolver, RuntimeModeProbeResolver, and BackendRouter in isolation, but no integration test verifies their orchestration within RuntimeAdapter or the routing dispatch in ExecuteAsync (lines 828–860).

Add tests that verify:

  • Universal profile resolution path in AttachAsync
  • Mode probing metadata attachment and effective mode selection
  • ExecuteAsync backend routing dispatch with capability probing fallback
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs` around lines 145 - 253,
Add integration unit tests for RuntimeAdapter to cover the universal profile
variant resolution and runtime mode probing orchestration and ExecuteAsync
routing: write tests that call AttachAsync (via Run/Arrange/Assert) with
profileId equal to ProfileVariantResolver.UniversalProfileId and a mocked
_profileVariantResolver that returns a ProfileVariantResolution with
ResolvedProfileId, ReasonCode, Confidence and optional FingerprintId, then
assert the returned AttachSession (or CurrentSession) process metadata contains
"requestedProfileId", "resolvedVariant", "resolvedVariantReasonCode",
"resolvedVariantConfidence" (and fingerprint when provided); also mock
_runtimeModeProbeResolver to return a ModeProbe with HintMode, EffectiveMode,
ReasonCode and signal counts and assert metadata keys "runtimeModeHint",
"runtimeModeEffective", "runtimeModeReasonCode", "runtimeModeTacticalSignals",
"runtimeModeGalacticSignals" and that the session process Mode equals the
EffectiveMode; finally add a test for ExecuteAsync that sets up mocked backends
and capability probing fallback (mock router/resolver to force capability probe
path) and assert ExecuteAsync dispatches to the expected backend method (or
route) when capability probing is used. Use dependency injection/mocks for
_profileRepository, _signatureResolver, _runtimeModeProbeResolver,
_profileVariantResolver and backend router, and verify metadata on the
AttachSession.Process.Metadata and routing behavior from ExecuteAsync.

Comment on lines 30 to 49
class NamedPipeBridgeServer {
public:
    using Handler = std::function<BridgeResult(const BridgeCommand&)>;

    explicit NamedPipeBridgeServer(std::string pipeName);

    void setHandler(Handler handler);
    bool start();
    void stop();
    bool running() const noexcept;

private:
    void runLoop();
    BridgeResult handleRawCommand(const std::string& jsonLine) const;

    std::string pipeName_;
    Handler handler_;
    std::atomic<bool> running_ {false};
    std::thread worker_;
};

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the implementation file and check for destructor
find . -name "NamedPipeBridgeServer.cpp" -o -name "*NamedPipeBridgeServer*.cpp" | head -20

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 128


🏁 Script executed:

#!/bin/bash
# Check for destructor in the implementation file
grep -n "~NamedPipeBridgeServer\|destructor" ./native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp
# Also show overall file to understand structure
wc -l ./native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 132


🏁 Script executed:

#!/bin/bash
# Read the entire cpp file to check for destructor and thread handling
cat -n ./native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 10176


Missing destructor — std::thread worker_ will call std::terminate() if joinable at destruction.

NamedPipeBridgeServer holds a std::thread worker_ but declares no destructor. If the object is destroyed without calling stop() first, the thread remains joinable and the default destructor will invoke std::terminate(), crashing the application.

While stop() properly joins the thread when called (lines 160–162 of the .cpp file), there is no guarantee it will be invoked before destruction. Add an explicit destructor to the header that ensures the thread is cleaned up.

Proposed fix
     bool start();
     void stop();
     bool running() const noexcept;
+    ~NamedPipeBridgeServer();
 
 private:

And implement in the .cpp file:

NamedPipeBridgeServer::~NamedPipeBridgeServer() {
    stop();
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@native/SwfocExtender.Bridge/include/swfoc_extender/bridge/NamedPipeBridgeServer.hpp`
around lines 30 - 49, The class NamedPipeBridgeServer lacks a destructor and its
member std::thread worker_ can be joinable at object destruction causing
std::terminate; add an explicit destructor (~NamedPipeBridgeServer) to the class
declaration and implement it to call stop() so the runLoop/worker_ is stopped
and joined safely (ensure stop() is noexcept-safe to call from a destructor and
references handler_/running_ correctly); update the header to declare
~NamedPipeBridgeServer() and the .cpp to define it as calling stop().

Comment on lines 135 to 163
void NamedPipeBridgeServer::stop() {
    if (!running_.exchange(false)) {
        return;
    }

#if defined(_WIN32)
    auto fullPipeName = std::string("\\\\.\\pipe\\") + pipeName_;
    auto deadline = std::chrono::steady_clock::now() + std::chrono::milliseconds(800);
    while (std::chrono::steady_clock::now() < deadline) {
        auto client = CreateFileA(
            fullPipeName.c_str(),
            GENERIC_READ | GENERIC_WRITE,
            0,
            nullptr,
            OPEN_EXISTING,
            0,
            nullptr);
        if (client != INVALID_HANDLE_VALUE) {
            CloseHandle(client);
            break;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(25));
    }
#endif

    if (worker_.joinable()) {
        worker_.join();
    }
}

⚠️ Potential issue | 🟠 Major

stop() can deadlock: if the busy-poll times out before successfully connecting, worker_.join() blocks indefinitely while the worker is still in ConnectNamedPipe.

The 800 ms window assumes the pipe is created and available within that time. If CreateNamedPipeA is failing on the worker side (sleep-retry path, lines 237–239), the pipe may not exist yet when stop() gives up — and worker_.join() will hang. The idiomatic fix for Windows is CancelSynchronousIo, which directly cancels the blocking ConnectNamedPipe on the worker thread, causing it to return with ERROR_OPERATION_ABORTED. The existing !connected branch already handles this gracefully (close, continue, then running_.load() returns false).

🔧 Proposed fix using CancelSynchronousIo
 void NamedPipeBridgeServer::stop() {
     if (!running_.exchange(false)) {
         return;
     }
 
 #if defined(_WIN32)
-    auto fullPipeName = std::string("\\\\.\\pipe\\") + pipeName_;
-    auto deadline = std::chrono::steady_clock::now() + std::chrono::milliseconds(800);
-    while (std::chrono::steady_clock::now() < deadline) {
-        auto client = CreateFileA(
-            fullPipeName.c_str(),
-            GENERIC_READ | GENERIC_WRITE,
-            0,
-            nullptr,
-            OPEN_EXISTING,
-            0,
-            nullptr);
-        if (client != INVALID_HANDLE_VALUE) {
-            CloseHandle(client);
-            break;
-        }
-        std::this_thread::sleep_for(std::chrono::milliseconds(25));
-    }
+    if (worker_.joinable()) {
+        CancelSynchronousIo(static_cast<HANDLE>(worker_.native_handle()));
+    }
 #endif
 
     if (worker_.joinable()) {
         worker_.join();
     }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 135 -
163, stop() can deadlock because the busy-poll may time out before the worker's
CreateNamedPipeA/ConnectNamedPipe runs, leaving the worker blocked in
ConnectNamedPipe; to fix, on Windows call CancelSynchronousIo on the worker
thread's native handle before calling worker_.join() so the blocked
ConnectNamedPipe returns with ERROR_OPERATION_ABORTED and the worker can exit;
update NamedPipeBridgeServer::stop() to, after the CreateFileA polling loop and
before worker_.join(), check if worker_.joinable() and running_.load()==false
and then call CancelSynchronousIo(worker_.native_handle()) (guarded by #if defined(_WIN32)),
optionally check/ignore its return value, then join the
worker; reference symbols: stop(), worker_, ConnectNamedPipe, CreateFileA,
CreateNamedPipeA, running_.exchange/running_.load(), CancelSynchronousIo,
worker_.native_handle().

Comment on lines 90 to 99
var variant = await _profileVariantResolver.ResolveAsync(request.ProfileId, cancellationToken: cancellationToken);
var fingerprint = await _binaryFingerprintService.CaptureFromPathAsync(processPath, processId, cancellationToken);

var anchors = ExtractResolvedAnchors(request.Context);
var capability = await _capabilityMapResolver.ResolveAsync(
    fingerprint,
    variant.ResolvedProfileId,
    request.OperationId,
    anchors,
    cancellationToken);

⚠️ Potential issue | 🟠 Major

CaptureFromPathAsync and ResolveAsync on the capability map can throw — unhandled exceptions will bypass diagnostics logging.

Lines 91, 94-99 call external services that may throw (process exited, file not found, I/O error). Unlike other error paths in this method, these exceptions propagate without writing to _sdkDiagnosticsSink, breaking the invariant that every ExecuteAsync call produces a diagnostic record.

🛡️ Proposed fix: wrap in try/catch and emit a diagnostic result
         var variant = await _profileVariantResolver.ResolveAsync(request.ProfileId, cancellationToken: cancellationToken);
-        var fingerprint = await _binaryFingerprintService.CaptureFromPathAsync(processPath, processId, cancellationToken);
 
-        var anchors = ExtractResolvedAnchors(request.Context);
-        var capability = await _capabilityMapResolver.ResolveAsync(
-            fingerprint,
-            variant.ResolvedProfileId,
-            request.OperationId,
-            anchors,
-            cancellationToken);
+        BinaryFingerprint fingerprint;
+        try
+        {
+            fingerprint = await _binaryFingerprintService.CaptureFromPathAsync(processPath, processId, cancellationToken);
+        }
+        catch (Exception ex)
+        {
+            var captureError = new SdkOperationResult(
+                false,
+                $"Failed to capture binary fingerprint: {ex.Message}",
+                CapabilityReasonCode.RuntimeNotAttached,
+                SdkCapabilityStatus.Unavailable);
+            await _sdkDiagnosticsSink.WriteAsync(request, captureError, cancellationToken);
+            return captureError;
+        }
+
+        var anchors = ExtractResolvedAnchors(request.Context);
+        CapabilityResolutionResult capability;
+        try
+        {
+            capability = await _capabilityMapResolver.ResolveAsync(
+                fingerprint,
+                variant.ResolvedProfileId,
+                request.OperationId,
+                anchors,
+                cancellationToken);
+        }
+        catch (Exception ex)
+        {
+            var resolveError = new SdkOperationResult(
+                false,
+                $"Capability resolution failed: {ex.Message}",
+                CapabilityReasonCode.UnknownSdkOperation,
+                SdkCapabilityStatus.Unavailable);
+            await _sdkDiagnosticsSink.WriteAsync(request, resolveError, cancellationToken);
+            return resolveError;
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Core/Services/SdkOperationRouter.cs` around lines 90 - 99,
The calls to _binaryFingerprintService.CaptureFromPathAsync and
_capabilityMapResolver.ResolveAsync can throw and are not currently wrapped,
which allows exceptions to bypass _sdkDiagnosticsSink; update the ExecuteAsync
flow in SdkOperationRouter (around the CaptureFromPathAsync and ResolveAsync
calls) to wrap those awaits in a try/catch that catches Exception, emits a
diagnostic result to _sdkDiagnosticsSink (matching the shape/fields used by
other error paths), and returns/short-circuits the method; ensure you include
relevant context (request.OperationId, request.ProfileId, processId/processPath,
and exception) in the diagnostic and preserve cancellation token usage when
calling the services.

Comment on lines +77 to +100
private static Task<IReadOnlyList<string>> TryGetLoadedModulesAsync(int? processId, CancellationToken cancellationToken)
{
    if (!processId.HasValue || processId.Value <= 0)
    {
        return Task.FromResult<IReadOnlyList<string>>(Array.Empty<string>());
    }

    try
    {
        cancellationToken.ThrowIfCancellationRequested();
        using var process = Process.GetProcessById(processId.Value);
        var modules = process.Modules
            .Cast<ProcessModule>()
            .Select(x => x.ModuleName)
            .Where(x => !string.IsNullOrWhiteSpace(x))
            .Distinct(StringComparer.OrdinalIgnoreCase)
            .OrderBy(x => x, StringComparer.OrdinalIgnoreCase)
            .ToArray();
        return Task.FromResult<IReadOnlyList<string>>(modules);
    }
    catch
    {
        return Task.FromResult<IReadOnlyList<string>>(Array.Empty<string>());
    }

⚠️ Potential issue | 🟠 Major

Silent catch in TryGetLoadedModulesAsync buries diagnostics, violating the "record actionable reason codes" guideline.

The bare catch at line 97 swallows Win32Exception (elevated privilege required), InvalidOperationException (process exited mid-enumeration), and other process-inspection failures with no trace. Since the method is private static it has no logger, so failures are completely invisible.

Options:

  1. Make the method an instance method (or accept an ILogger parameter) so failures can be logged at Debug.
  2. Bubble the exception up to CaptureFromPathAsync where the logger is available, then handle and log there.

This is especially important given that privilege failures (the most common real-world failure mode) look identical to "no process provided" in the current implementation. Based on learnings, runtime operations must record actionable reason codes in diagnostics.
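A minimal catch-block sketch for option 1, assuming the enumeration becomes an instance method with access to the class's ILogger; the reason-code string is illustrative, not an existing constant:

catch (Exception ex)
{
    // Surface the failure instead of silently returning an empty list, so
    // privilege errors are distinguishable from "no process id supplied".
    _logger.LogDebug(ex,
        "Module enumeration failed for PID {ProcessId}; reasonCode={ReasonCode}",
        processId, "MODULE_ENUMERATION_FAILED");
    return Task.FromResult<IReadOnlyList<string>>(Array.Empty<string>());
}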

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Runtime/Services/BinaryFingerprintService.cs` around lines
77 - 100, TryGetLoadedModulesAsync currently swallows all exceptions; change it
to surface diagnostics by adding an ILogger parameter (or making it an instance
method so you can access the class logger) and log any exception at Debug with
an actionable reason code and exception details instead of a bare catch;
specifically, update the signature TryGetLoadedModulesAsync(int? processId,
CancellationToken cancellationToken, ILogger logger) (or instance method using
the existing ILogger), call cancellationToken.ThrowIfCancellationRequested() as
before, and in the catch block log logger.LogDebug("TryGetLoadedModulesAsync:
Failed to enumerate modules for PID {processId}; reason={ReasonCode}",
processId, "<Win32Exception|InvalidOperation|Other>") including the actual
exception object, then return the empty list only after logging; alternatively,
if you prefer bubbling, remove the catch and let the exception propagate to
CaptureFromPathAsync where the logger exists and then log the same actionable
reason codes there.

Comment on lines +160 to +174
function Invoke-WslBuild {
    param(
        [string]$Config
    )

    $script = Join-Path $repoRoot "tools/native/build-native.sh"
    $linuxScript = $script.Replace("\\", "/")
    $linuxRoot = ([string]$repoRoot).Replace("\\", "/")
    $quotedCommand = "bash '$linuxScript' '$linuxRoot/native/build-wsl' '$Config'"

    & wsl.exe -e bash -lc $quotedCommand
    if ((Get-LastExitCodeOrZero) -ne 0) {
        throw "native build failed for WSL mode."
    }
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, locate and examine the build-native.ps1 file
fd "build-native.ps1" --type f

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 98


🏁 Script executed:

# Also check for build-native.sh that this script references
fd "build-native.sh" --type f

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 97


🏁 Script executed:

# Search for WSL-related path handling patterns in the codebase
rg "wsl\.exe" --type ps1 -B 2 -A 5

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 96


🏁 Script executed:

# Check if there's documentation about WSL or Windows assumptions
fd "AGENTS.md|README.md" --type f

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 263


🏁 Script executed:

# Read the build-native.ps1 file to verify the code snippet
cat -n tools/native/build-native.ps1 | head -180

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 7898


🏁 Script executed:

# Read tools/AGENTS.md for documentation requirements
cat tools/AGENTS.md

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 695


🏁 Script executed:

# Read tools/native/README.md for WSL-specific documentation
cat tools/native/README.md

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 123


🏁 Script executed:

# Search for wsl.exe usage without type filter
rg "wsl\.exe" -B 2 -A 5

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 541


WSL path translation is incomplete — paths must be converted to WSL format, not just backslash-escaped.

Lines 166-167 replace backslashes with forward slashes but omit the drive letter translation (e.g., C:\repo\... becomes C:/repo/..., which is invalid inside WSL; it must be /mnt/c/repo/...). Use wsl.exe wslpath -u to perform correct path conversion:

 function Invoke-WslBuild {
     param(
         [string]$Config
     )
 
     $script = Join-Path $repoRoot "tools/native/build-native.sh"
-    $linuxScript = $script.Replace("\\", "/")
-    $linuxRoot = ([string]$repoRoot).Replace("\\", "/")
+    $linuxScript = (& wsl.exe wslpath -u $script)
+    $linuxRoot = (& wsl.exe wslpath -u ([string]$repoRoot))
     $quotedCommand = "bash '$linuxScript' '$linuxRoot/native/build-wsl' '$Config'"
 
     & wsl.exe -e bash -lc $quotedCommand

Additionally, per coding guidelines, add explicit documentation to the script help or function comments stating that WSL mode assumes WSL is installed and configured, and paths require proper conversion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/native/build-native.ps1` around lines 160 - 174, The Invoke-WslBuild
function currently only replaces backslashes with slashes for $linuxScript and
$linuxRoot, which doesn't convert Windows drive letters to WSL paths; change the
flow to call wsl.exe wslpath -u to convert both $script and $repoRoot into
proper WSL paths (use the converted values instead of the simple Replace results
when building $quotedCommand), ensure you handle the wslpath call failures and
propagate errors to the existing exit-check, and add a brief function
comment/header stating that WSL mode requires WSL installed and that paths are
converted via wslpath -u (mentioning $linuxScript, $linuxRoot, and
$quotedCommand in the comment).

Comment on lines +21 to +24
if ([string]::IsNullOrWhiteSpace($OutputPath)) {
    $outputDir = Join-Path $repoRoot "TestResults/research/$RunId"
    New-Item -ItemType Directory -Path $outputDir -Force | Out-Null
    $OutputPath = Join-Path $outputDir "fingerprint.json"

⚠️ Potential issue | 🟠 Major

Output path TestResults/research/<runId>/ violates the mandated TestResults/runs/<runId>/ directory structure.

The coding guideline explicitly requires all per-run output to be isolated under TestResults/runs/<runId>/. This script writes to a different subtree, meaning the CI fingerprint-validation steps and validate-repro-bundle.ps1 that expect the standard path will not find research artifacts, and the isolation guarantee isn't uniformly enforced.

Either migrate to TestResults/runs/<runId>/research/ (keeping it within the mandated structure) or explicitly amend the coding guideline to carve out TestResults/research/ as a recognized second root.

As per coding guidelines: "Per-run output isolation must be maintained under TestResults/runs/<runId>/ directory structure".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/capture-binary-fingerprint.ps1` around lines 21 - 24, The
script sets $outputDir to "TestResults/research/$RunId" which violates the
required per-run isolation; change the default path logic so $outputDir is built
under the mandated TestResults/runs tree (e.g., Join-Path $repoRoot
"TestResults/runs/$RunId/research") and continue to create the directory and set
$OutputPath (using $OutputPath, $outputDir, $repoRoot, $RunId) so downstream
consumers like validate-repro-bundle.ps1 find artifacts in the expected
location.

Comment on lines +1 to +4
param(
    [Parameter(Mandatory = $true)][string]$InputPath,
    [string]$OutputPath
)

🛠️ Refactor suggestion | 🟠 Major

Missing deterministic smoke command and explicit failure classification codes.

Two coding-guideline violations:

  1. No smoke command: The script has no embedded usage example or fixture invocation in its header. tools/fixtures/signature_pack_sample.json already exists and should be referenced (e.g., as a # Example: comment or #Requires block) to satisfy the "deterministic fixture or smoke command" requirement.

  2. Failure classification codes absent: The only error path is throw "Input signature pack not found: $InputPath" with no classification code. Use an explicit code (e.g., [ERR_MISSING_INPUT]) consistent with how other tooling scripts in this PR classify failures.

♻️ Suggested header addition
 param(
     [Parameter(Mandatory = $true)][string]$InputPath,
     [string]$OutputPath
 )
+
+# Example (smoke):
+#   pwsh ./tools/research/normalize-signature-pack.ps1 \
+#       -InputPath tools/fixtures/signature_pack_sample.json \
+#       -OutputPath /tmp/normalized_sample.json

And for the failure throw:

-    throw "Input signature pack not found: $InputPath"
+    throw "[ERR_MISSING_INPUT] Input signature pack not found: $InputPath"

As per coding guidelines: "Every tooling script change must include a deterministic fixture or smoke command" and "Failure conditions must use explicit classification codes".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/normalize-signature-pack.ps1` around lines 1 - 4, Add a
deterministic smoke/fixture reference and a failure classification code: update
the script header (the param block) to include an Example/Requires comment that
references the existing fixture tools/fixtures/signature_pack_sample.json (e.g.,
add a "# Example: .\normalize-signature-pack.ps1 -InputPath
tools/fixtures/signature_pack_sample.json" or a `#Requires-style` note) so the
script has a deterministic smoke command, and change the error throw for the
missing input (the existing throw "Input signature pack not found: $InputPath")
to throw a classified message or use a custom error with an explicit code like
"[ERR_MISSING_INPUT]" so the failure path includes the classification code.

generatedAtUtc = [string]$pack.generatedAtUtc
fingerprintId = [string]$pack.fingerprintId
defaultProfileId = [string]$pack.defaultProfileId
sourceProfilePath = [string]$pack.sourceProfilePath

⚠️ Potential issue | 🟠 Major

null"" coercion silently changes sourceProfilePath semantics.

In PowerShell, [string]$null evaluates to "". When the input pack has "sourceProfilePath": null, the output will serialize as "sourceProfilePath": "" instead of null. Any consumer that checks for absence via $null -eq $pack.sourceProfilePath will get $false — incorrect behaviour. All other fields in the block are required non-nullable, so only sourceProfilePath is affected.

🐛 Proposed fix
-    sourceProfilePath = [string]$pack.sourceProfilePath
+    sourceProfilePath = $pack.sourceProfilePath
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/normalize-signature-pack.ps1` at line 37, The current
assignment sourceProfilePath = [string]$pack.sourceProfilePath coerces $null to
"" and loses null semantics; change it to preserve null by checking
$pack.sourceProfilePath for $null and only casting to [string] when it's
non-null (set sourceProfilePath to $null if $pack.sourceProfilePath is $null);
refer to the variable name sourceProfilePath and the input
pack.sourceProfilePath to locate and update the assignment.

Comment on lines +40 to +42
if ($fingerprint.moduleList -isnot [System.Collections.IEnumerable]) {
    Add-ValidationError -Errors $errors -Message "moduleList must be an array"
}

⚠️ Potential issue | 🟠 Major

[System.Collections.IEnumerable] incorrectly accepts JSON string values as a valid moduleList.

In PowerShell, System.String implements IEnumerable<char>, so for a fingerprint JSON like { "moduleList": "some_string" }, the condition $fingerprint.moduleList -isnot [System.Collections.IEnumerable] evaluates to $false — no error is added, and the strict-mode check silently passes. The intent is clearly to require an array; use [array] instead.

🐛 Proposed fix
-    if ($fingerprint.moduleList -isnot [System.Collections.IEnumerable]) {
+    if ($fingerprint.moduleList -isnot [array]) {
         Add-ValidationError -Errors $errors -Message "moduleList must be an array"
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/validate-binary-fingerprint.ps1` around lines 40 - 42, The check using
$fingerprint.moduleList -isnot [System.Collections.IEnumerable] incorrectly
treats strings as valid because System.String implements IEnumerable; replace
that type test with an explicit array check (e.g. use -isnot [array] on
$fingerprint.moduleList) so non-array JSON values (like strings) cause
Add-ValidationError to be invoked; update the validation near the
$fingerprint.moduleList check and keep the same Add-ValidationError call and
message.

@Prekzursil
Owner Author

Merge-Readiness Refresh (2026-02-19)

  • branch: rd/deep-re-universal-sdk
  • deterministic gate rerun: pass
    • dotnet restore SwfocTrainer.sln
    • dotnet build SwfocTrainer.sln -c Release
    • dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests"
    • result: Passed: 129, Failed: 0, Skipped: 0

Affected Profiles

  • base_swfoc
  • aotr_1397421866_swfoc
  • roe_3447786229_swfoc

Authoritative Live Evidence

  • runId: 20260218-213838
  • classification: passed
  • repro JSON: TestResults/runs/20260218-213838/repro-bundle.json
  • repro Markdown: TestResults/runs/20260218-213838/repro-bundle.md
  • route/capability: CAPABILITY_PROBE_PASS
  • hook state: HOOK_ONESHOT
  • selected host: StarWarsG.exe

No merge performed in this pass; PR remains ready for reviewer approval + merge button.

Prekzursil and others added 3 commits February 19, 2026 05:02
Add the native extender bridge/runtime path, backend routing and capability contracts, profile updates, and deterministic tooling/tests needed for strict ROE live validation. Includes Windows native build script hardening for VS18 generator discovery and host artifact verification.

Co-authored-by: Codex <noreply@openai.com>
Align architecture, profile format, external tools setup, live validation runbook, and TODO tracking with the proven-core extender workflow and strict ROE gate acceptance criteria.

Co-authored-by: Codex <noreply@openai.com>
Capture the single accepted live validation run for reconciliation acceptance. Includes strict non-blocked classification and extender route/hook evidence under runId 20260218-213838.

Co-authored-by: Codex <noreply@openai.com>
@Prekzursil Prekzursil force-pushed the rd/deep-re-universal-sdk branch from eb697d3 to c00bfec on February 19, 2026 03:05

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c00bfece8c


var session = await _runtime.AttachAsync(effectiveProfileId);
if (variant is not null)
{
    SelectedProfileId = effectiveProfileId;


P2 Badge Preserve universal_auto selection after resolving variant

When attach starts from universal_auto, this assignment permanently replaces the user’s selected profile with the resolved concrete variant. After detaching, subsequent attaches no longer re-run universal resolution, so switching to a different mod/process can reuse a stale profile and trigger profile mismatch/hard-fail behavior instead of auto-detecting again. Keeping the UI selection as universal_auto avoids regressing the auto-resolution workflow.
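One way to keep the auto-resolution workflow intact, sketched against the snippet above; ResolvedVariantProfileId is a hypothetical display-only property, while ProfileVariantResolver.UniversalProfileId is referenced elsewhere in this PR:

var session = await _runtime.AttachAsync(effectiveProfileId);
if (variant is not null)
{
    // Show the concrete variant for diagnostics, but keep the selection on
    // universal_auto so the next attach re-runs variant resolution.
    ResolvedVariantProfileId = effectiveProfileId;
    SelectedProfileId = ProfileVariantResolver.UniversalProfileId;
}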


ExecutionKind.Save => await ExecuteSaveActionAsync(request, cancellationToken),
ExecutionKind.CodePatch => await ExecuteCodePatchActionAsync(request, cancellationToken),
ExecutionKind.Freeze => new ActionExecutionResult(false, "Freeze actions must be handled by the orchestrator, not the runtime adapter.", AddressSource.None),
ExecutionKind.Sdk => await ExecuteSdkActionAsync(request, cancellationToken),


P2 Badge Honor non-extender backend decisions for SDK-tagged actions

This branch routes ExecutionKind.Sdk back into ExecuteSdkActionAsync even when the backend router has already selected the legacy memory path, so a profile forcing backendPreference: "memory"/"helper" for set_credits will still go through SDK gating and can fail with SDK-specific errors instead of executing via the chosen backend. In practice this makes backend preference overrides ineffective for SDK-tagged actions.
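A rough sketch of honouring the router's decision for SDK-tagged actions; routeDecision.Backend, BackendKind, and the memory/helper execute methods are assumptions about this codebase, not confirmed APIs:

// Route by the backend decision rather than by ExecutionKind alone, so a
// profile forcing backendPreference: "memory" or "helper" still takes effect.
ExecutionKind.Sdk => routeDecision.Backend switch
{
    BackendKind.Extender => await ExecuteSdkActionAsync(request, cancellationToken),
    BackendKind.Memory => await ExecuteMemoryActionAsync(request, cancellationToken),
    BackendKind.Helper => await ExecuteHelperActionAsync(request, cancellationToken),
    _ => await ExecuteSdkActionAsync(request, cancellationToken),
},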



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 020608e093



using var client = new NamedPipeClientStream(
    serverName: ".",
    pipeName: DefaultPipeName,


P2 Badge Read configured extender pipe name before connecting

NamedPipeExtenderBackend always connects to the hard-coded DefaultPipeName in SendCoreAsync, so runtime probe/execute calls fail whenever the bridge host is started with a non-default pipe (the host supports SWFOC_EXTENDER_PIPE_NAME override in BridgeHostMain.cpp). In those environments every extender-routed action degrades to timeout/unreachable behavior even though the host is healthy, so the client should resolve the pipe name from the same env var before creating NamedPipeClientStream.
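A small sketch of resolving the pipe name from the same override the host honours; SWFOC_EXTENDER_PIPE_NAME comes from the comment above, and treating it as a plain environment variable on the client side is an assumption:

// Prefer the SWFOC_EXTENDER_PIPE_NAME override; fall back to the built-in default.
private static string ResolvePipeName()
{
    var configured = Environment.GetEnvironmentVariable("SWFOC_EXTENDER_PIPE_NAME");
    return string.IsNullOrWhiteSpace(configured) ? DefaultPipeName : configured;
}

// ...and pass pipeName: ResolvePipeName() when constructing NamedPipeClientStream.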



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tools/schemas/repro-bundle.schema.json (1)

25-25: ⚠️ Potential issue | 🟡 Minor

schemaVersion should be bumped — five new required fields constitute a breaking schema change.

Adding selectedHostProcess, backendRouteDecision, capabilityProbeSnapshot, hookInstallReport, and overlayState to the required array means any repro bundle without these fields (captured before this PR) will now fail validation against a schema still pinned at "const": "1.0". Bumping to "1.1" (or "2.0") and updating the const in the validator would make the version boundary explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/schemas/repro-bundle.schema.json` at line 25, The schemaVersion const
should be bumped to reflect the breaking change: update "schemaVersion" const
from "1.0" to "1.1" (or "2.0") and update any validator code that checks the
schemaVersion constant so it accepts the new version; this change is required
because the newly added required properties selectedHostProcess,
backendRouteDecision, capabilityProbeSnapshot, hookInstallReport, and
overlayState make the schema incompatible with prior bundles, so ensure
functions or modules that validate against the schemaVersion value are updated
to the new "1.1" constant (or the chosen new version) to make the version
boundary explicit.
src/SwfocTrainer.App/ViewModels/MainViewModel.cs (1)

671-688: ⚠️ Potential issue | 🟡 Minor

Status message doesn't reflect the universal_auto override.

When universal_auto is present and wins (Lines 677–680), the status at Line 686 still shows recommended: <old profile> even though SelectedProfileId is now "universal_auto". This is a minor UX discrepancy that may confuse users.

🐛 Proposed fix
+        var selectedForDisplay = SelectedProfileId ?? recommended;
         Status = !string.IsNullOrWhiteSpace(recommended)
-            ? $"Loaded {Profiles.Count} profiles (recommended: {recommended})"
+            ? $"Loaded {Profiles.Count} profiles (selected: {selectedForDisplay}, recommended: {recommended})"
             : $"Loaded {Profiles.Count} profiles";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.App/ViewModels/MainViewModel.cs` around lines 671 - 688, The
status message can show a stale "recommended" value when UniversalProfileId
wins; update the logic that sets Status to reflect the actual selected profile
by using SelectedProfileId (or compute a displayRecommended variable) instead of
the original recommended variable—i.e., after resolving
resolvedProfileId/SelectedProfileId (and after the UniversalProfileId check),
build the status string using SelectedProfileId when showing the chosen profile
so it will show "recommended: universal_auto" when UniversalProfileId was
selected.
🟠 Major comments (20)
TestResults/runs/20260218-213838/20260218-213838-live-hero-helper.trx (1)

2-2: ⚠️ Potential issue | 🟠 Major

Developer PII (username, machine name, and local absolute paths) committed to public repository.

The following attributes embed personal identifiable information that will be permanently stored in git history:

  • runUser (line 2): DESKTOP-A60S9GP\Prekzursil
  • runDeploymentRoot (line 5): Prekzursil_DESKTOP-A60S9GP_2026-02-18_23_39_54
  • storage (line 16): full local path under c:\users\prekzursil\...
  • codeBase (line 18): full local path under C:\Users\Prekzursil\...

If this artifact (and similar TRX files in this PR) must be committed, consider either:

  1. Post-processing the TRX before committing to scrub machine-specific fields, or
  2. Adding TestResults/runs/**/*.trx to .gitignore and committing only the aggregated live-validation-summary.json (which presumably does not carry these fields).

Also applies to: 5-5, 16-16, 18-18

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/20260218-213838-live-hero-helper.trx` at
line 2, The TRX contains developer PII in attributes runUser, runDeploymentRoot,
storage, and codeBase; remove or neutralize those values in
TestResults/runs/20260218-213838/20260218-213838-live-hero-helper.trx (replace
with generic placeholders or strip the attributes) and re-commit the sanitized
TRX, and to prevent recurrence add an ignore rule for TRX artifacts (e.g.
TestResults/runs/**/*.trx) to .gitignore and stop tracking existing TRX files
(remove them from the index so they are not committed going forward).
TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx (1)

8-8: ⚠️ Potential issue | 🟠 Major

startTime/endTime timestamps are inconsistent with duration — live evidence artifact data integrity gap.

duration="00:00:34.2247037" (34.2 s) is correct and consistent with both the xUnit Finished log at 00:00:34.42 and the run-level start/finish wall clock (Lines 3, 35). However, startTime and endTime differ by only ~35 µs — essentially identical — because the xUnit VSTest adapter flushes per-test timestamps at result-collection time rather than at actual test start/end boundaries.

This means the per-test timestamps cannot be used to cross-correlate against the live-process diagnostics cited in the PR (selectedHostProcess, backendRouteDecision, capabilityProbeSnapshot, hookInstallReport). For an artifact that is the authoritative live-validation evidence, this reduces its forensic value. Consider enriching the test with explicit wall-clock markers logged to StdOut, or using a wrapper that captures OS-level start/end timestamps and injects them into the repro bundle rather than relying on VSTest's per-test timestamps.
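A minimal xUnit-style sketch of the explicit wall-clock markers; using ITestOutputHelper and the exact marker format are assumptions about how the live tests log:

// Emit OS-level start/end timestamps so the TRX/StdOut can be cross-correlated
// with the repro bundle's selectedHostProcess and hookInstallReport evidence.
var startedUtc = DateTimeOffset.UtcNow;
_output.WriteLine($"live-test-start-utc={startedUtc:O}");
try
{
    // ... existing tactical toggle workflow ...
}
finally
{
    _output.WriteLine($"live-test-end-utc={DateTimeOffset.UtcNow:O}");
}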

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx` at line
8, The per-test startTime/endTime written by the VSTest adapter are not true
wall-clock boundaries (they're flushed at result collection), so the test
SwfocTrainer.Tests.Profiles.LiveTacticalToggleWorkflowTests.Tactical_Toggles_Should_Execute_And_Revert_When_Tactical_Mode
must emit reliable OS-level timestamps: modify that test (or its test
fixture/wrapper) to capture high-resolution DateTime.UtcNow (or Stopwatch ticks)
at the actual test start and end and write them to StdOut (or into the test
result payload) so they become part of the run artifact; alternatively implement
a thin wrapper around the test execution that records OS timestamps and injects
them into the repro bundle/trace rather than relying on VSTest’s
startTime/endTime.
TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx (1)

2-2: ⚠️ Potential issue | 🟠 Major

PII (username + hostname) committed to repository.

runUser="DESKTOP-A60S9GP\Prekzursil" embeds the developer's Windows username and machine hostname directly in a committed artifact. Combined with the same values in computerName (Line 8) and the storage/codeBase paths (Lines 16–18), the full local identity is permanently written into Git history on a public repository.

Consider scrubbing or templating these fields before committing (e.g. replace with REDACTED or a CI-neutral placeholder), or generate the TRX in a sanitised form via a post-processing script in the tooling pipeline.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/20260218-213838-live-tactical.trx` at line
2, The committed TRX embeds local PII in the TestRun element (attributes runUser
and computerName and paths in storage/codeBase); sanitize these fields by
replacing the values with a neutral placeholder (e.g., REDACTED or CI_MACHINE)
before committing or emitting the file, and add a post-processing sanitiser in
your test pipeline that locates the TestRun element and overwrites runUser,
computerName, storage and codeBase attributes (or removes them) so no local
username/hostname or local paths are written to Git history.
native/SwfocExtender.Overlay/include/swfoc_extender/overlay/OverlayState.hpp (1)

11-11: ⚠️ Potential issue | 🟠 Major

visible_ is not thread-safe — use std::atomic<bool>.

In a game overlay, setVisible is typically driven by an input/hotkey thread while visible() is polled by the render thread. Accessing a plain bool across threads without synchronization is undefined behaviour in C++. The fix is a one-line change and doesn't affect the noexcept contract.

🔒 Proposed fix
+#include <atomic>
+
 namespace swfoc::extender::overlay {

 class OverlayState {
 public:
     void setVisible(bool visible) noexcept;
     bool visible() const noexcept;

 private:
-    bool visible_ {false};
+    std::atomic<bool> visible_ {false};
 };

With std::atomic<bool>, the setter/getter implementations in OverlayState.cpp remain correct as-is — operator= and implicit operator bool use the default seq_cst ordering, which is safe for a simple visibility flag. If relaxed ordering is acceptable for performance, use store/load explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Overlay/include/swfoc_extender/overlay/OverlayState.hpp`
at line 11, The member visible_ in class OverlayState is accessed from different
threads and must be made thread-safe: change its type from bool to
std::atomic<bool> (add `#include` <atomic>) so reads/writes are atomic; no changes
needed to the existing setVisible / visible methods in OverlayState.cpp since
atomic<bool>'s default operations provide the required ordering (or you can use
store/load with an explicit memory_order if you prefer).
TestResults/runs/20260218-213838/launch-context-fixture.json-4-4 (1)

4-4: ⚠️ Potential issue | 🟠 Major

Developer's local username and machine path committed to source control; path also has a double-escaping bug.

Two distinct issues on this line:

1. PII / path leakage (major): The detail field contains a hardcoded local filesystem path that embeds the developer's OS username: C:\\Users\\Prekzursil\\AppData\\Local\\.... Committing personal local paths to source control leaks filesystem/username information and becomes a permanent part of git history. The value should be redacted or replaced with a sanitised placeholder such as <python-executable> or a path relative to a well-known env var (e.g., %LOCALAPPDATA%\\...).

2. Malformed path from double-escaping (minor): After JSON string parsing, \\ becomes \ and \\\\ becomes \\. The segment Programs\\\\Python\\\\Python312\\\\python.exe decodes to Programs\\Python\\Python312\\python.exe — literal double-backslashes — which is not a valid Windows path. The first half of the path uses correct single \\ escaping while the second half uses \\\\. This is a classic PowerShell ConvertTo-Json artefact when the input string was already escaped. The path serialisation should use a single level of JSON escaping throughout.

🛡️ Proposed fix — redact PII and fix escaping
-    "detail":  "python produced no output. executable: C:\\Users\\Prekzursil\\AppData\\Local\\Programs\\\\Python\\\\Python312\\\\python.exe",
+    "detail":  "python produced no output. executable: %LOCALAPPDATA%\\Programs\\Python\\Python312\\python.exe",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/launch-context-fixture.json` at line 4, The
"detail" string in the JSON contains a developer's local path (PII) and a
double-escaping artifact; update the serialization that produces the "detail"
field so it emits a sanitized value (e.g., "<python-executable>" or a
platform-env-based placeholder like "%LOCALAPPDATA%\\...") instead of the full
C:\\Users\\... path, and ensure JSON string escaping is performed only once so
backslashes are single-escaped (use the actual path string not an
already-escaped string before calling JSON serialization). Target the code that
sets the "detail" field (the producer that writes launch-context-fixture.json /
the value assigned to "detail") and replace hardcoded local paths with a
redaction/placeholder and fix the escaping pipeline to avoid double-escaped
backslashes.
native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp-20-31 (1)

20-31: ⚠️ Potential issue | 🟠 Major

HookLifecycleManager has no thread-safety guarantees — concurrent access to hooks_ would be undefined behaviour.

markInstalled, markFailed, markRolledBack, and get all read/write a shared std::unordered_map with no synchronisation. If called from more than one thread, the result is a data race and UB.

Add protection using std::shared_mutex (shared lock for get, exclusive lock for the three mutators), or document explicitly that the class must be accessed from a single thread only.
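
A minimal sketch of the shared_mutex approach, assuming a string-keyed map and a placeholder value type (the real key, record, and method signatures in HookLifecycleManager.hpp may differ):

#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_map>

class HookLifecycleManager {
public:
    void markInstalled(const std::string& hookId) {
        std::unique_lock lock(mutex_);        // exclusive lock: mutators get sole access
        hooks_[hookId] = "installed";
    }

    std::optional<std::string> get(const std::string& hookId) const {
        std::shared_lock lock(mutex_);        // shared lock: concurrent readers allowed
        const auto it = hooks_.find(hookId);
        if (it == hooks_.end()) {
            return std::nullopt;
        }
        return it->second;
    }

private:
    mutable std::shared_mutex mutex_;         // mutable so get() can lock from a const method
    std::unordered_map<std::string, std::string> hooks_;   // placeholder value type
};

markFailed and markRolledBack would take the same std::unique_lock as markInstalled.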

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp`
around lines 20 - 31, HookLifecycleManager currently accesses the shared
unordered_map hooks_ from multiple methods without synchronization; add a
std::shared_mutex member (e.g., mutex_) and protect access: use
std::unique_lock<std::shared_mutex> in mutators markInstalled, markFailed,
markRolledBack and use std::shared_lock<std::shared_mutex> in get to allow
concurrent reads; also include the appropriate header(s) (<shared_mutex>) and
ensure the lock objects are created at the start of each method to guard hooks_.
tools/research/extract-pe-metadata.ps1-10-12 (2)

10-12: ⚠️ Potential issue | 🟠 Major

Failure condition missing explicit classification code.

As per coding guidelines, "Failure conditions must use explicit classification codes." The bare throw "Module not found: ..." gives no structured code for callers or CI pipelines to key on.

🔧 Proposed fix
-    throw "Module not found: $moduleFullPath"
+    throw "[MODULE_NOT_FOUND] Module not found: $moduleFullPath"

As per coding guidelines: "Failure conditions must use explicit classification codes."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` around lines 10 - 12, Replace the
bare throw in the module existence check with a failure that includes an
explicit classification code so callers/CI can parse it: when Test-Path for
$moduleFullPath fails, throw or write an error that prefixes a machine-readable
code (e.g. ERR_MODULE_NOT_FOUND or E_MODULE_NOT_FOUND) along with the
descriptive message and ensure the thrown object/message includes both the code
and $moduleFullPath (update the throw/writing location around the Test-Path
check referencing $moduleFullPath).

10-12: ⚠️ Potential issue | 🟠 Major

Failure condition missing explicit classification code.

The throw emits a plain prose string. As per coding guidelines, "Failure conditions must use explicit classification codes." Downstream scripts and CI pipelines cannot key on a structured code for routing or suppression.

🔧 Proposed fix
-    throw "Module not found: $moduleFullPath"
+    throw "[MODULE_NOT_FOUND] Module not found: $moduleFullPath"

As per coding guidelines: "Failure conditions must use explicit classification codes."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` around lines 10 - 12, Replace the
plain string throw with a structured failure that includes an explicit
classification code; in the file's existence check where the script currently
does throw "Module not found: $moduleFullPath", instead throw an exception whose
message begins with a stable classification token (e.g. "MODULE_NOT_FOUND") or
construct an ErrorRecord/Exception with a Code-like field, include
$moduleFullPath in the message, and throw that object so downstream tooling can
reliably parse the classification (locate the throw that references
$moduleFullPath in extract-pe-metadata.ps1 and update it accordingly).
tools/research/extract-pe-metadata.ps1-1-38 (2)

1-38: ⚠️ Potential issue | 🟠 Major

No deterministic fixture or smoke command included.

As per coding guidelines, "Every tooling script change must include a deterministic fixture or smoke command." The script ships with no inline example, no sibling fixture file, and no CI smoke step that invokes it against a known-good binary and asserts the output schema. Add a smoke invocation at minimum as a comment block, or a tests/fixtures/smoke-extract-pe-metadata.ps1:

# Smoke example — run from repo root against a committed fixture binary:
# .\tools\research\extract-pe-metadata.ps1 `
#     -ModulePath  "tests/fixtures/sample.exe" `
#     -OutputPath  "TestResults/runs/smoke/sample.metadata.json"
# $out = Get-Content "TestResults/runs/smoke/sample.metadata.json" | ConvertFrom-Json
# if ($out.schemaVersion -ne "1.0")  { throw "[SMOKE_SCHEMA_MISMATCH] schemaVersion unexpected" }
# if (-not $out.fileVersion)         { throw "[SMOKE_FIELD_EMPTY] fileVersion is empty" }

As per coding guidelines: "Every tooling script change must include a deterministic fixture or smoke command."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` around lines 1 - 38, The script
tools/research/extract-pe-metadata.ps1 lacks the required deterministic
smoke/fixture; add a minimal reproducible smoke invocation and assertion either
as a commented example at top of the script or as a separate
tests/fixtures/smoke-extract-pe-metadata.ps1 that calls the script with
-ModulePath and -OutputPath, reads the produced JSON and asserts
result.schemaVersion == "1.0" and that result.fileVersion (or another required
field) is non-empty; reference the param names ModulePath and OutputPath, the
output file schemaVersion field, and the conversion step (ConvertFrom-Json) in
the assertions so CI or a reviewer can run the smoke and validate the schema.

1-38: ⚠️ Potential issue | 🟠 Major

No deterministic fixture or smoke command included.

As per coding guidelines, every tooling script change must include a deterministic fixture or smoke command. Neither the script nor its diff adds an inline example invocation, a sibling fixture file, or a CI smoke step that validates the script's output against a known-good artifact.

A minimal approach is a smoke invocation comment block at the bottom, or a companion tests/fixtures/extract-pe-metadata.smoke.ps1:

# Smoke example (run from repo root, output verified against schema):
# .\tools\research\extract-pe-metadata.ps1 `
#   -ModulePath "tests/fixtures/sample.exe" `
#   -OutputPath "TestResults/runs/smoke/sample.metadata.json"
# $out = Get-Content "TestResults/runs/smoke/sample.metadata.json" | ConvertFrom-Json
# if ($out.schemaVersion -ne "1.0") { throw "[SMOKE_FAIL] schemaVersion mismatch" }

As per coding guidelines: "Every tooling script change must include a deterministic fixture or smoke command."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` around lines 1 - 38, Add a
deterministic smoke invocation for the extract-pe-metadata.ps1 script: include a
commented example block at the bottom of extract-pe-metadata.ps1 showing how to
run Extract-PE-Metadata (ModulePath and OutputPath parameters), then read and
validate the produced JSON (ConvertFrom-Json) and assert schemaVersion == "1.0"
(throw on mismatch); alternatively add a small companion fixture file
tests/fixtures/extract-pe-metadata.smoke.ps1 that performs the same
deterministic run against a committed sample (e.g., tests/fixtures/sample.exe)
and exits non-zero on failure so CI or reviewers can run the smoke test.
tools/research/extract-pe-metadata.ps1-1-4 (2)

1-4: ⚠️ Potential issue | 🟠 Major

Missing script help block and Windows-only assumption documentation.

[System.Diagnostics.FileVersionInfo]::GetVersionInfo only yields meaningful ProductName, FileVersion, OriginalFilename, etc. for PE (Windows) binaries. Running this against a non-PE file silently emits null/empty fields with no error. As per coding guidelines, "Windows live target assumptions must be kept explicit in documentation and script help." Add at least a #Requires comment or a comment-based help block with a .NOTES section calling this out, and add [CmdletBinding()] to enable -Verbose.

📝 Proposed minimal help block
+<#
+.SYNOPSIS
+    Extracts PE (Portable Executable) metadata from a Windows binary and writes it as structured JSON.
+
+.DESCRIPTION
+    Reads FileVersionInfo and file-system properties from a Windows PE binary and emits a
+    schema-versioned JSON artifact. Only meaningful on PE/Windows targets; non-PE binaries
+    will yield empty version fields without error.
+
+.PARAMETER ModulePath
+    Path to the target PE module (e.g. StarWarsG.exe).
+
+.PARAMETER OutputPath
+    Optional. Destination JSON path. Defaults to <ModulePath-without-extension>.metadata.json.
+
+.NOTES
+    WINDOWS-ONLY ASSUMPTION: FileVersionInfo fields are PE-specific. Run on Windows hosts only.
+    Output isolation: pass an explicit -OutputPath rooted under TestResults/runs/<runId>/ for
+    per-run artifact tracking.
+#>
+[CmdletBinding()]
 param(
     [Parameter(Mandatory = $true)][string]$ModulePath,
     [string]$OutputPath
 )

As per coding guidelines: "Windows live target assumptions must be kept explicit in documentation and script help."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` around lines 1 - 4, Add a
comment-based help block and make the Windows-only assumption explicit, and
enable -Verbose: add [CmdletBinding()] above the param(...) block to enable
-Verbose, then add a comment-based help block (Synopsis, Description, Parameters
for ModulePath and OutputPath, and a .NOTES section) that calls out that this
script targets Windows PE files and that
[System.Diagnostics.FileVersionInfo]::GetVersionInfo will return null/empty for
non-PE files; alternatively include a top-level `#Requires` comment indicating
Windows, and mention the ModulePath/OutputPath parameters and the reliance on
[System.Diagnostics.FileVersionInfo]::GetVersionInfo in the .NOTES so callers
know the Windows-only assumption.

1-4: ⚠️ Potential issue | 🟠 Major

Add a script help block documenting the Windows/PE-only assumption.

[System.Diagnostics.FileVersionInfo]::GetVersionInfo fields (ProductName, FileVersion, OriginalFilename, etc.) are only meaningful for PE (Windows) binaries. A non-PE binary silently emits null/empty fields without error. There is also no [CmdletBinding()], so common parameters like -Verbose, -ErrorAction, and -WhatIf are unavailable. As per coding guidelines, "Windows live target assumptions must be kept explicit in documentation and script help."

📝 Proposed help block + CmdletBinding
+<#
+.SYNOPSIS
+    Extracts PE (Portable Executable) metadata from a Windows binary and writes it as structured JSON.
+
+.DESCRIPTION
+    Reads FileVersionInfo and file-system properties from a Windows PE binary and emits a
+    schema-versioned JSON artifact (schemaVersion 1.0).
+    NOTE: Windows PE/Windows-only assumption — FileVersionInfo fields are PE-specific. Non-PE
+    binaries will yield empty version fields without error. Must be run on a Windows host.
+
+.PARAMETER ModulePath
+    Path to the target PE module (e.g. StarWarsG.exe).
+
+.PARAMETER OutputPath
+    Optional. Destination JSON path. Defaults to <ModulePath-without-extension>.metadata.json.
+    For per-run artifact tracking, supply a path rooted under TestResults/runs/<runId>/.
+
+.NOTES
+    Output isolation: the default path places the artifact alongside the source binary.
+    Pass an explicit -OutputPath under TestResults/runs/<runId>/ when used in validation runs.
+#>
+[CmdletBinding()]
 param(
     [Parameter(Mandatory = $true)][string]$ModulePath,
     [string]$OutputPath
 )

As per coding guidelines: "Windows live target assumptions must be kept explicit in documentation and script help."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` around lines 1 - 4, Add a PowerShell
help block and CmdletBinding to the script: document that the script operates on
Windows PE binaries only and that
[System.Diagnostics.FileVersionInfo]::GetVersionInfo fields (ProductName,
FileVersion, OriginalFilename, etc.) will be null/empty for non-PE files;
include examples for the mandatory parameter ModulePath and optional OutputPath,
and add [CmdletBinding()] at the top so common parameters like -Verbose,
-ErrorAction, and -WhatIf are available; ensure the help block clearly states
the Windows/PE-only assumption and expected behaviors when non-PE input is
provided.
tools/research/extract-pe-metadata.ps1-37-37 (2)

37-37: ⚠️ Potential issue | 🟠 Major

Set-Content -Encoding UTF8 writes a BOM on Windows PowerShell 5.x, breaking JSON consumers.

The official Microsoft documentation confirms that in PowerShell 5.1, UTF8 encoding means "UTF-8 (with BOM)". The utf8NoBOM value is not available in PS 5.1; it was introduced in PowerShell 6. A BOM-prefixed JSON file can silently fail to parse or be misparsed by jq, by ConvertFrom-Json in some hosts, and by any byte-level JSON parser that doesn't strip the BOM preamble. The safe cross-version fix is [System.IO.File]::WriteAllText with an explicit no-BOM encoding instance.

🔧 Proposed fix
-($result | ConvertTo-Json -Depth 6) | Set-Content -Path $OutputPath -Encoding UTF8
+$json = $result | ConvertTo-Json -Depth 6
+[System.IO.File]::WriteAllText($OutputPath, $json, [System.Text.UTF8Encoding]::new($false))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` at line 37, The current write uses
Set-Content with -Encoding UTF8 which produces a BOM on PowerShell 5.x; replace
that write with a call to [System.IO.File]::WriteAllText using the JSON string
produced by ConvertTo-Json ($result | ConvertTo-Json -Depth 6) and an explicit
no-BOM encoding instance (e.g. New-Object System.Text.UTF8Encoding($false)) so
the file at $OutputPath is written as UTF‑8 without a BOM; update the code that
currently calls Set-Content to build the JSON string then call WriteAllText with
the no‑BOM encoding.

37-37: ⚠️ Potential issue | 🟠 Major

Set-Content -Encoding UTF8 writes a UTF-8 BOM on Windows PowerShell 5.1, breaking JSON consumers downstream.

Windows PowerShell 5.1 writes UTF-8 with BOM (bytes EF BB BF) when using -Encoding UTF8, while PowerShell 7+ defaults to no BOM. JSON parsers (jq, ConvertFrom-Json, Node.js) can fail or misparse with a leading BOM. Use [System.IO.File]::WriteAllText with [System.Text.UTF8Encoding]::new($false) for cross-version correctness and JSON schema compliance.

🔧 Proposed fix
-($result | ConvertTo-Json -Depth 6) | Set-Content -Path $OutputPath -Encoding UTF8
+$json = $result | ConvertTo-Json -Depth 6
+[System.IO.File]::WriteAllText($OutputPath, $json, [System.Text.UTF8Encoding]::new($false))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/extract-pe-metadata.ps1` at line 37, The current use of
Set-Content with -Encoding UTF8 writes a BOM on Windows PowerShell 5.1; replace
the Set-Content call so the JSON produced by ConvertTo-Json (using $result and
$OutputPath) is written without a BOM by calling the .NET API to write the
string (e.g., use [System.IO.File]::WriteAllText) and pass an explicit UTF8
encoding instance that disables the BOM (e.g.,
[System.Text.UTF8Encoding]::new($false)); update the line that references
Set-Content to use this WriteAllText approach so downstream JSON consumers
aren’t broken.
src/SwfocTrainer.Profiles/Services/FileSystemProfileRepository.cs-154-158 (1)

154-158: ⚠️ Potential issue | 🟠 Major

BackendPreference / HostPreference fallback to parent is unreachable due to non-null defaults.

As noted in ProfileModels.cs, the record defaults ("auto", "starwarsg_preferred") mean child.BackendPreference and child.HostPreference are never null/whitespace after deserialization, so lines 155 and 157 always select the child value. The parent's values can never propagate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Profiles/Services/FileSystemProfileRepository.cs` around
lines 154 - 158, The fallback logic for BackendPreference and HostPreference in
FileSystemProfileRepository.cs never uses the parent because
child.BackendPreference and child.HostPreference are initialized to non-empty
defaults; update the ternaries to treat those default sentinel values as "not
set". Specifically, when constructing the merged profile, replace the checks
string.IsNullOrWhiteSpace(child.BackendPreference) and
string.IsNullOrWhiteSpace(child.HostPreference) with comparisons that detect the
record defaults (e.g., child.BackendPreference ==
ProfileDefaults.BackendPreferenceDefault or child.BackendPreference == "auto",
and child.HostPreference == ProfileDefaults.HostPreferenceDefault or
child.HostPreference == "starwarsg_preferred") and fall back to
parent.BackendPreference / parent.HostPreference when the child equals the
sentinel default.
src/SwfocTrainer.Core/Models/ProfileModels.cs-46-50 (1)

46-50: ⚠️ Potential issue | 🟠 Major

Default values for BackendPreference and HostPreference prevent child profiles from inheriting parent values.

The merge logic (lines 155–157 of FileSystemProfileRepository) uses string.IsNullOrWhiteSpace() to decide whether to use a child's value or fall back to the parent's. Since BackendPreference defaults to "auto" and HostPreference defaults to "starwarsg_preferred", any child profile omitting these fields will be deserialized with the default non-empty strings. The merge will then treat these defaults as explicitly set values, making it impossible to inherit from the parent.

While all current profiles explicitly set both fields (no inheritance is attempted), this is a fragile design that prevents inheritance semantics from working correctly. Consider either:

  1. Using null as the default and applying "auto" / "starwarsg_preferred" as runtime defaults downstream, or
  2. Updating the merge logic to check if the child value equals the record default before deciding to fall back to the parent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Core/Models/ProfileModels.cs` around lines 46 - 50, The
record defaults for BackendPreference ("auto") and HostPreference
("starwarsg_preferred") on ProfileModels.cs cause child profiles to appear
explicitly set and block inheritance in FileSystemProfileRepository's merge
logic (used around the merge method that checks string.IsNullOrWhiteSpace on the
child). Fix by making those properties nullable defaults (BackendPreference =
null, HostPreference = null) so omitted fields deserialize as null and
inheritance works, and ensure any runtime use that needs
"auto"/"starwarsg_preferred" applies those as runtime defaults; alternatively,
if you prefer code-change in merging, update the merge in
FileSystemProfileRepository to treat the record default values as "unset" when
deciding to fall back to parent (i.e., check for null or equal to the record
default before using child value).
src/SwfocTrainer.Runtime/Services/RuntimeModeProbeResolver.cs-101-114 (1)

101-114: ⚠️ Potential issue | 🟠 Major

CountSignals uses a negative health-status filter that admits degraded symbols as valid signals.

The condition skips a symbol only if its HealthStatus is Unresolved. SymbolHealthStatus has three values: Healthy, Degraded, and Unresolved. The current filter allows Degraded symbols to be counted, but degraded symbols should not contribute to mode determination. The stated intent is to count only symbols with a healthy status. Use a positive guard instead:

🛡️ Proposed fix
-if (info.Address == nint.Zero || info.HealthStatus == SymbolHealthStatus.Unresolved)
+if (info.Address == nint.Zero || info.HealthStatus != SymbolHealthStatus.Healthy)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Runtime/Services/RuntimeModeProbeResolver.cs` around lines
101 - 114, CountSignals currently treats any symbol that is not Unresolved as
valid, which incorrectly counts Degraded symbols; update the guard in
CountSignals to only accept symbols whose info.HealthStatus is
SymbolHealthStatus.Healthy (and still ensure info.Address != nint.Zero and info
!= null). Locate the method CountSignals and replace the negative health check
against Unresolved with a positive check for SymbolHealthStatus.Healthy so only
healthy symbols contribute to the count.
src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs-225-258 (1)

225-258: ⚠️ Potential issue | 🟠 Major

Thread.Sleep(200) inside lock(HostSync) blocks the thread pool and holds the lock.

Two compounding problems:

  1. Lock held for 200 ms: any concurrent caller racing to TryStartBridgeHostProcess stalls on lock acquisition for the entire sleep duration, not just for the process-start check.
  2. Async context: TryStartBridgeHostProcess is reached from an async method (SendAsync). The .Sleep pins the current thread pool thread instead of yielding, risking pool starvation under sustained concurrent usage.

Minimum fix — move the wait outside the lock so other callers can read HasExited immediately:

🐛 Proposed fix (minimum — move sleep outside lock)
-        try
-        {
-            _bridgeHostProcess = Process.Start(new ProcessStartInfo
-            {
-                FileName = hostPath,
-                WorkingDirectory = Path.GetDirectoryName(hostPath) ?? AppContext.BaseDirectory,
-                UseShellExecute = false,
-                CreateNoWindow = true
-            });
-            Thread.Sleep(200);
-            return _bridgeHostProcess is not null && !_bridgeHostProcess.HasExited;
-        }
+        Process? started = null;
+        try
+        {
+            _bridgeHostProcess = Process.Start(new ProcessStartInfo
+            {
+                FileName = hostPath,
+                WorkingDirectory = Path.GetDirectoryName(hostPath) ?? AppContext.BaseDirectory,
+                UseShellExecute = false,
+                CreateNoWindow = true
+            });
+            started = _bridgeHostProcess;
+        }
         catch
         {
             _bridgeHostProcess = null;
             return false;
         }
+    }
+    // Wait for process startup *outside* the lock so concurrent callers are not blocked.
+    if (started is not null)
+    {
+        Thread.Sleep(200);
     }
+    return started is not null && !started.HasExited;
 }

Preferred fix: make TryStartBridgeHostProcess async, replace Thread.Sleep with await Task.Delay(200), and replace the lock with a SemaphoreSlim.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs` around lines
225 - 258, TryStartBridgeHostProcess currently holds lock(HostSync) while
calling Thread.Sleep(200), which blocks thread-pool threads and prevents
concurrent callers (e.g., SendAsync) from observing
_bridgeHostProcess.HasExited; change the method to an async implementation:
replace the lock(HostSync) with a SemaphoreSlim (e.g., _hostSemaphore) awaited
with WaitAsync/Release, replace Thread.Sleep(200) with await Task.Delay(200)
(moved outside the critical section if you keep the lock approach), and ensure
all call sites (including SendAsync) await the async TryStartBridgeHostProcess;
ensure _bridgeHostProcess is still checked/assigned inside the
semaphore-protected section and the post-delay HasExited check happens after the
delay without holding the semaphore longer than necessary.
tools/research/run-capability-intel.ps1-16-18 (1)

16-18: ⚠️ Potential issue | 🟠 Major

Orchestrator output root TestResults/research/<RunId>/ violates the per-run isolation mandate.

This is the top-level orchestrator for the capability intel workflow and sets the $OutputRoot that flows into all three sub-scripts. Fixing it here corrects the path for all artifacts produced by the pipeline (fingerprint.json, signature-pack.json, analysis-notes.md) in a single change.

🐛 Proposed fix
 if ([string]::IsNullOrWhiteSpace($OutputRoot)) {
-    $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId"
+    $OutputRoot = Join-Path $repoRoot "TestResults/runs/$RunId/research"
 }

As per coding guidelines: "Per-run output isolation must be maintained under TestResults/runs/<runId>/ directory structure".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/run-capability-intel.ps1` around lines 16 - 18, The current
orchestration sets $OutputRoot to TestResults/research/$RunId which violates
per-run isolation; change the Join-Path call that assigns $OutputRoot (when
[string]::IsNullOrWhiteSpace($OutputRoot)) to use the mandated structure
"TestResults/runs/<RunId>" (e.g., Join-Path $repoRoot "TestResults/runs/$RunId")
so all downstream artifacts (fingerprint.json, signature-pack.json,
analysis-notes.md) produced via $OutputRoot are placed under the per-run
isolated directory; ensure $RunId is used exactly and no other references to
"research" remain in this assignment.
tools/research/generate-signature-candidates.ps1-29-32 (1)

29-32: ⚠️ Potential issue | 🟠 Major

Default output path TestResults/research/<RunId>/ violates the per-run isolation mandate.

The coding guideline requires all per-run output under TestResults/runs/<runId>/. Both this script and run-capability-intel.ps1 write to a parallel TestResults/research/ subtree, meaning downstream consumers that expect artifacts under the canonical path won't find them.

🐛 Proposed fix
 if ([string]::IsNullOrWhiteSpace($OutputPath)) {
-    $outputDir = Join-Path $repoRoot "TestResults/research/$RunId"
+    $outputDir = Join-Path $repoRoot "TestResults/runs/$RunId/research"
     New-Item -ItemType Directory -Path $outputDir -Force | Out-Null
     $OutputPath = Join-Path $outputDir "signature-pack.json"
 } else {

As per coding guidelines: "Per-run output isolation must be maintained under TestResults/runs/<runId>/ directory structure".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/generate-signature-candidates.ps1` around lines 29 - 32, The
script currently builds a default per-run output directory using
TestResults/research/$RunId which violates the per-run isolation mandate; update
the path construction so $outputDir uses "TestResults/runs/$RunId" instead of
"TestResults/research/$RunId" (i.e., change the Join-Path call that combines
$repoRoot and "TestResults/research/$RunId" to use "TestResults/runs/$RunId")
and leave the subsequent New-Item and $OutputPath assignment (Join-Path for
"signature-pack.json") unchanged so artifacts are created under the canonical
per-run tree; ensure references to $OutputPath/$outputDir in this script (and
mirror the same change in run-capability-intel.ps1) are consistent.

Comment on lines +8 to +21
private static readonly IReadOnlyDictionary<string, SdkOperationDefinition> Operations =
    new Dictionary<string, SdkOperationDefinition>(StringComparer.OrdinalIgnoreCase)
    {
        ["list_selected"] = SdkOperationDefinition.ReadOnly("list_selected"),
        ["list_nearby"] = SdkOperationDefinition.ReadOnly("list_nearby"),
        ["spawn"] = SdkOperationDefinition.Mutation("spawn", RuntimeMode.Tactical, RuntimeMode.Galactic),
        ["kill"] = SdkOperationDefinition.Mutation("kill", RuntimeMode.Tactical),
        ["set_owner"] = SdkOperationDefinition.Mutation("set_owner", RuntimeMode.Tactical, RuntimeMode.Galactic),
        ["teleport"] = SdkOperationDefinition.Mutation("teleport", RuntimeMode.Tactical),
        ["set_planet_owner"] = SdkOperationDefinition.Mutation("set_planet_owner", RuntimeMode.Galactic),
        ["set_hp"] = SdkOperationDefinition.Mutation("set_hp", RuntimeMode.Tactical),
        ["set_shield"] = SdkOperationDefinition.Mutation("set_shield", RuntimeMode.Tactical),
        ["set_cooldown"] = SdkOperationDefinition.Mutation("set_cooldown", RuntimeMode.Tactical)
    };

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, let's find the SdkOperationRouter implementation
echo "=== Searching for SdkOperationRouter ==="
rg -n --type=cs 'SdkOperationRouter' | head -20

echo -e "\n=== Searching for set_credits usage ==="
rg -n --type=cs 'set_credits' -C 3

echo -e "\n=== Searching for TryGet calls on catalog ==="
rg -n --type=cs 'TryGet\s*\(' -C 3 | head -50

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 36824


🏁 Script executed:

#!/bin/bash
# Let's find the file containing SdkOperationRouter
echo "=== Finding SdkOperationRouter class ==="
fd --type=f --name='*.cs' | xargs grep -l 'class SdkOperationRouter'

echo -e "\n=== Finding files related to Sdk operations ==="
fd --type=f --name='*.cs' | xargs grep -l 'SdkOperationCatalog\|SdkOperationRouter' | head -10

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 677


🏁 Script executed:

#!/bin/bash
echo "=== SdkOperationRouter.cs full context ==="
cat -n src/SwfocTrainer.Core/Services/SdkOperationRouter.cs | head -100

echo -e "\n=== Check if set_credits bypasses SdkOperationRouter ==="
rg -n --type=cs 'set_credits' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 5 -B 5 | head -60

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 6009


🏁 Script executed:

#!/bin/bash
echo "=== Check ExecuteAsync path in RuntimeAdapter ==="
rg -n --type=cs 'ExecuteAsync' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs | head -20

echo -e "\n=== Check how SDK operations are routed ==="
rg -n --type=cs 'ExecutionKind.Sdk' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -B 3 -A 3 | head -50

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 1105


🏁 Script executed:

#!/bin/bash
echo "=== Check if ExecuteSdkActionAsync has special handling for set_credits ==="
rg -n --type=cs 'ExecuteSdkActionAsync' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 30

echo -e "\n=== Verify the test expectations for set_credits with Sdk ==="
rg -n --type=cs 'ExecutionKind.Sdk' tests/ | grep -i 'set_credits'

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 3545


🏁 Script executed:

#!/bin/bash
echo "=== Full ExecuteSdkActionAsync implementation ==="
rg -n --type=cs 'private async Task<ActionExecutionResult> ExecuteSdkActionAsync' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 50

echo -e "\n=== Check if there is an AllowUnknownMode property in SdkOperationDefinition ==="
rg -n --type=cs 'class SdkOperationDefinition\|record SdkOperationDefinition' src/SwfocTrainer.Core/Models/ -A 20

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 2262


🏁 Script executed:

#!/bin/bash
echo "=== Find SdkOperationDefinition ==="
find . -name "*.cs" -type f | xargs grep -l "SdkOperationDefinition" | head -5

echo -e "\n=== Search for SdkOperationDefinition definition ==="
rg -n --type=cs 'record SdkOperationDefinition|class SdkOperationDefinition' -A 15

echo -e "\n=== Check IsMutatingSdkOperation for set_credits ==="
rg -n --type=cs 'IsMutatingSdkOperation' src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs -A 10

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 2594


set_credits is missing from the SDK operation catalog and will fail at runtime.

The execution path for set_credits with ExecutionKind.Sdk routes through SdkOperationRouter.ExecuteAsync, which calls SdkOperationCatalog.TryGet("set_credits") at line 60. Since set_credits is absent from the catalog (lines 8–21), this lookup returns false, causing the router to immediately reject the operation with "not part of the v1 catalog." No fallback or bypass exists.

Add set_credits to the catalog as a mutation operation for RuntimeMode.Galactic:

Suggested catalog entry
["set_credits"] = SdkOperationDefinition.Mutation("set_credits", RuntimeMode.Galactic)

This is required for the ROE profile's SDK-routed execution to function, as confirmed by ProfileActionCatalogTests and LiveRoeRuntimeHealthTests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Core/Models/SdkOperationCatalog.cs` around lines 8 - 21, The
Operations dictionary in SdkOperationCatalog is missing the "set_credits" entry
which causes SdkOperationRouter.ExecuteAsync to fail when it calls
TryGet("set_credits"); add an entry to the static Operations dictionary using
SdkOperationDefinition.Mutation for "set_credits" with RuntimeMode.Galactic
(i.e., mirror the other entries like "set_planet_owner" but using the key
"set_credits" and Mutation("set_credits", RuntimeMode.Galactic)) so TryGet
succeeds and the SDK-routed execution path can proceed.

$snapshot = New-Object System.Collections.Generic.List[object]
foreach ($proc in $processes) {
$steamIds = Parse-SteamModIds -CommandLine $proc.CommandLine
$hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\\.exe|sweaw\\.exe)$") { "launcher" } else { "unknown" }

⚠️ Potential issue | 🔴 Critical

Bug: double-backslash regex never matches swfoc.exe / sweaw.exe — all non-StarWarsG processes get hostRole = "unknown".

In PowerShell, backslash is not a string escape character, so "^(swfoc\\.exe|sweaw\\.exe)$" produces the literal regex ^(swfoc\\.exe|sweaw\\.exe)$. In .NET regex, \\ matches a literal backslash character, so this pattern requires a \ in the process name (e.g., swfoc\.exe). Actual process names like swfoc.exe will never match, causing the elseif branch to be dead code and those processes to always get "unknown" as hostRole.

Compare with line 56 which correctly uses single backslash: "^(swfoc\.exe|StarWarsG\.exe|sweaw\.exe)$".

🐛 Fix: use single backslash to escape the dot in regex
-        $hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\\.exe|sweaw\\.exe)$") { "launcher" } else { "unknown" }
+        $hostRole = if ($proc.Name -ieq "StarWarsG.exe") { "game_host" } elseif ($proc.Name -match "^(swfoc\.exe|sweaw\.exe)$") { "launcher" } else { "unknown" }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/collect-mod-repro-bundle.ps1` at line 62, The regex in the $hostRole
assignment uses double backslashes ("^(swfoc\\.exe|sweaw\\.exe)$") so it looks
for a literal backslash before ".exe" and never matches real process names;
update the pattern used with $proc.Name (or replace the whole elseif condition)
to escape the dot correctly (e.g., use "\." or "[.]" in the regex) so names like
"swfoc.exe" and "sweaw.exe" match and the elseif branch can set hostRole to
"launcher".

Prekzursil and others added 2 commits February 22, 2026 13:17
- complete overload, complexity, markdown, and PowerShell cleanup for PR49 blockers
- align repro bundle tooling/schema to v1.1 and update fixture/contracts
- sanitize committed live evidence artifacts and add remediation checklist traceability

Co-authored-by: Codex <noreply@openai.com>
- replace parameter-heavy diagnostics helper signatures in BackendRouter and CapabilityMapResolver
- initialize capability map DTO properties to satisfy Sonar unassigned-property checks

Co-authored-by: Codex <noreply@openai.com>
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 1f6d32687d


bool isMutating,
Dictionary<string, object?> diagnostics)
{
if (capabilityReport.IsFeatureAvailable(request.Action.Id))


P1: Honor capability confidence before allowing mutating extender routes

This branch allows extender routing whenever IsFeatureAvailable is true, but mutating operations can still be in an unproven state (Experimental) at probe time (for example, the new economy probe advertises available=true before hook installation is verified). In that scenario writes like set_credits are permitted instead of fail-closing on uncertain capability confidence, which weakens the safety gate for mutation paths.


[ordered]@{
backend = "extender"
probeReasonCode = if ([string]::IsNullOrWhiteSpace([string]$runtimeEvidence.result.capabilityProbeReasonCode)) { "CAPABILITY_UNKNOWN" } else { [string]$runtimeEvidence.result.capabilityProbeReasonCode }
capabilityCount = 1


P2: Populate capabilityCount from evidence instead of constant

When runtime evidence is present, the repro bundle always records capabilityCount = 1 instead of deriving the value from the probe output. Any run where the probe returns zero capabilities (or more than one) will emit incorrect diagnostics, which can skew hard-gate triage and make the attached evidence bundle factually inconsistent with runtime behavior.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 14

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs (1)

341-378: ⚠️ Potential issue | 🟡 Minor

Add a deterministic tie-breaker for process selection.

When scores and tie-breakers are equal, ordering falls back to enumeration order, which can vary and lead to non-deterministic attach targets. Add a final stable tie-breaker (e.g., ProcessId) so selection is deterministic.

🔧 Suggested fix
            .OrderByDescending(x => x.SelectionScore)
            .ThenByDescending(x => x.WorkshopMatchCount)
            .ThenByDescending(x => x.RecommendationMatch)
            .ThenByDescending(x => x.MainModuleSize)
            .ThenByDescending(x => x.HasCommandLine)
+           .ThenByDescending(x => x.Process.ProcessId)
            .ToArray();
As per coding guidelines, "Ensure attach host selection is deterministic when both swfoc.exe and StarWarsG.exe exist for FoC".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Runtime/Services/RuntimeAdapter.cs` around lines 341 - 378,
The selection ordering for ranked candidates can be non-deterministic when
scores and all tie-breakers match; update the final ordering chain after
SelectionScore to include a stable tie-breaker using the process identifier
(e.g., x.Process.ProcessId) so results are deterministic. Modify the LINQ sort
after OrderByDescending(x => x.SelectionScore) / ThenByDescending chains to
append a final ThenBy (or ThenByDescending as appropriate) on
x.Process.ProcessId (use a consistent string comparison like Ordinal or
OrdinalIgnoreCase) so identical scores always resolve in a stable order.
tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs (1)

26-68: ⚠️ Potential issue | 🟡 Minor

Ensure live evidence is produced or explicitly skip when output dir is unset.

Right now the test can run without emitting evidence when SWFOC_LIVE_OUTPUT_DIR is missing, which makes the live run non-deterministic and unverifiable. Consider skipping when the output directory is not configured, or writing to a default deterministic location.

🔧 Suggested fix (skip when output dir is missing)
        var roeCandidates = await FindRoeCandidatesAsync(locator);
        EnsureRoeCandidatesOrSkip(roeCandidates);
+       if (!TryGetLiveOutputDirectory(out _))
+       {
+           throw LiveSkip.For(_output, "SWFOC_LIVE_OUTPUT_DIR is not set; live evidence output required.");
+       }
As per coding guidelines, "Runtime/tooling/test changes must include deterministic test evidence or an explicit justified skip".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/SwfocTrainer.Tests/Profiles/LiveRoeRuntimeHealthTests.cs` around lines
26 - 68, The test Roe_Attach_And_Credits_Action_Should_Succeed_On_Live_Process
can run without producing evidence; before doing any live actions check the
configured output directory (e.g.
Environment.GetEnvironmentVariable("SWFOC_LIVE_OUTPUT_DIR") or whatever config
the test suite uses) and explicitly skip the test if it's unset/empty (use
Skip.If or the existing Ensure... helper pattern), or alternatively set a
deterministic default output path; ensure TryWriteRuntimeEvidence is only called
when the output dir is present so evidence is always produced or the test is
skipped.
src/SwfocTrainer.App/ViewModels/MainViewModel.cs (1)

671-688: ⚠️ Potential issue | 🟡 Minor

Status message misleads when universal_auto overrides the recommendation.

When universal_auto is present (lines 677–681), SelectedProfileId is set to universal_auto, but the Status at lines 685–686 still announces recommended: {recommended} (e.g. aotr_xyz). A user reading the status will think the named mod profile is selected, but universal_auto is actually active. Either omit the recommendation hint when universal_auto wins, or surface both:

💡 Suggested status update
  Status = !string.IsNullOrWhiteSpace(recommended)
-     ? $"Loaded {Profiles.Count} profiles (recommended: {recommended})"
+     ? Profiles.Contains(UniversalProfileId)
+         ? $"Loaded {Profiles.Count} profiles (universal_auto selected; detected: {recommended})"
+         : $"Loaded {Profiles.Count} profiles (recommended: {recommended})"
      : $"Loaded {Profiles.Count} profiles";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.App/ViewModels/MainViewModel.cs` around lines 671 - 688, The
status string can be misleading because when UniversalProfileId wins
(Profiles.Contains(UniversalProfileId)) SelectedProfileId is set to
UniversalProfileId but Status still shows the original recommended value; update
the logic that sets Status (after SelectedProfileId is resolved) to reflect the
actual chosen profile—either omit the "recommended: {recommended}" part when
SelectedProfileId == UniversalProfileId, or include both pieces (e.g.,
"recommended: {recommended} (overridden by universal_auto)") so the UI
accurately reflects SelectedProfileId, using the existing symbols Profiles,
recommended, UniversalProfileId, SelectedProfileId and Status to locate and
change the code.
🟡 Minor comments (20)
TestResults/runs/20260218-213838/issue-19-evidence-template.md-4-4 (1)

4-4: ⚠️ Potential issue | 🟡 Minor

Scope label is narrower than the table coverage, and the TRX artifacts are missing from the Artifacts section.

Two distinct points:

  1. Line 4 declares scope: ROE, but the evidence table on lines 9-10 covers both the aotr_1397421866_swfoc and roe_3447786229_swfoc profiles. The scope should reflect both, e.g. scope: AOTR + ROE.

  2. The AI-generated summary notes that run 20260218-213838 produced four TRX artifacts (live-credits.trx, live-hero-helper.trx, live-roe-health.trx, live-tactical.trx) and live-roe-runtime-evidence.json, none of which are listed in the Artifacts section. Per the project's evidence conventions, the TRX files are the primary machine-readable test output and should be referenced here for traceability.

📝 Suggested additions
-  - scope: ROE
+  - scope: AOTR + ROE

   Artifacts:

   - TestResults\runs\20260218-213838\live-validation-summary.json
   - TestResults\runs\20260218-213838\launch-context-fixture.json
   - TestResults\runs\20260218-213838\repro-bundle.json
   - TestResults\runs\20260218-213838\repro-bundle.md
+  - TestResults\runs\20260218-213838\live-tactical.trx
+  - TestResults\runs\20260218-213838\live-hero-helper.trx
+  - TestResults\runs\20260218-213838\live-roe-health.trx
+  - TestResults\runs\20260218-213838\live-credits.trx
+  - TestResults\runs\20260218-213838\live-roe-runtime-evidence.json

Also applies to: 19-24

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/issue-19-evidence-template.md` at line 4,
Update the front-matter and Artifacts section: change the scope value currently
set as "scope: ROE" to include both profiles (e.g., "scope: AOTR + ROE") because
the evidence table references both aotr_1397421866_swfoc and
roe_3447786229_swfoc, and add the missing machine-readable outputs to the
Artifacts list by including live-credits.trx, live-hero-helper.trx,
live-roe-health.trx, live-tactical.trx and live-roe-runtime-evidence.json so the
TRX files and the JSON runtime evidence are referenced for traceability.
TestResults/runs/20260218-213838/issue-19-evidence-template.md-14-17 (1)

14-17: ⚠️ Potential issue | 🟡 Minor

Empty parentheses () after each test suite name appear to be an unfilled token.

Lines 14-17 list all four suites as Passed (). The trailing () is likely a placeholder for a test count or elapsed-time value that was never substituted. If these were autogenerated, verify that the template interpolation ran correctly; otherwise fill in the actual counts or remove the empty parens.

📝 Suggested fix
-  - LiveTacticalToggleWorkflowTests: Passed ()
-  - LiveHeroHelperWorkflowTests: Passed ()
-  - LiveRoeRuntimeHealthTests: Passed ()
-  - LiveCreditsTests: Passed ()
+  - LiveTacticalToggleWorkflowTests: Passed (3/3)
+  - LiveHeroHelperWorkflowTests: Passed (2/2)
+  - LiveRoeRuntimeHealthTests: Passed (4/4)
+  - LiveCreditsTests: Passed (1/1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/issue-19-evidence-template.md` around lines
14 - 17, The four test-suite lines show an unfilled token "()" — update the
evidence-generation/template logic so those placeholders are either populated
with the correct value or removed; specifically, fix the template or
interpolation that renders the entries for LiveTacticalToggleWorkflowTests,
LiveHeroHelperWorkflowTests, LiveRoeRuntimeHealthTests and LiveCreditsTests so
they output a test count or elapsed time (e.g., "Passed (N)" or "Passed (0s)")
or simply "Passed" when no value is available. Locate the rendering code that
emits those suite lines and either supply the missing variable or change the
format string to omit the parentheses when the value is empty.
native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp-151-172 (1)

151-172: ⚠️ Potential issue | 🟡 Minor

Message-mode pipe: large messages (>16 KB) will be silently truncated.

With PIPE_READMODE_MESSAGE, if the message exceeds kPipeBufferSize, ReadFile returns FALSE with ERROR_MORE_DATA. The while loop exits, and the incomplete payload is processed. Given the expected command sizes this is unlikely to trigger, but consider checking GetLastError() == ERROR_MORE_DATA and either reading the remainder or returning an explicit error.
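
A sketch of a read loop that drains a message split across buffer-sized chunks; the function name, handle parameter, and buffer size are placeholders rather than the actual ReadCommandLine internals:

#include <windows.h>
#include <string>

// Reads one complete message from a message-mode pipe, looping while ERROR_MORE_DATA is reported.
bool ReadFullMessage(HANDLE pipe, std::string& message) {
    char chunk[4096] = {};
    message.clear();
    for (;;) {
        DWORD bytesRead = 0;
        const BOOL ok = ReadFile(pipe, chunk, static_cast<DWORD>(sizeof(chunk)), &bytesRead, nullptr);
        message.append(chunk, bytesRead);
        if (ok) {
            return true;    // full message consumed
        }
        if (GetLastError() != ERROR_MORE_DATA) {
            return false;   // genuine failure: surface it instead of processing a partial payload
        }
        // ERROR_MORE_DATA: the same message continues in the next chunk, keep reading.
    }
}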

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 151 -
172, In ReadCommandLine, handle the message-mode pipe truncation case: after a
ReadFile that returns FALSE, call GetLastError() and if it equals
ERROR_MORE_DATA, either loop to read remaining chunks until the full message is
consumed (appending to commandLine) or fail-fast by returning an explicit
error/empty string to signal truncation; update logic around the ReadFile loop
and refer to the ReadCommandLine function and kPipeBufferSize buffer to ensure
you handle successive ReadFile calls when ERROR_MORE_DATA is reported rather
than silently processing a partial command.
native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp-31-86 (1)

31-86: ⚠️ Potential issue | 🟡 Minor

Hand-rolled JSON extractors don't handle escaped characters or braces inside string values.

ExtractStringValue scans for the next " after the opening quote, so a value containing an escaped quote (\") would be truncated. Similarly, ExtractObjectJson counts {/} without skipping those inside string literals, so a string value containing a brace would corrupt the extraction.

This is acceptable if the command payloads are entirely controlled and guaranteed to never contain such characters, but it will silently produce incorrect results for any unexpected input. Consider documenting this limitation, or switching to a lightweight JSON library (e.g., nlohmann/json or simdjson) if payload complexity grows.
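
If the library route is taken, a sketch using nlohmann/json (not currently a dependency, so treat this as illustrative; the missing-key return conventions are guesses) that keeps the existing function shapes:

#include <nlohmann/json.hpp>
#include <string>

std::string ExtractStringValue(const std::string& json, const std::string& key) {
    const auto doc = nlohmann::json::parse(json, nullptr, /*allow_exceptions=*/false);
    if (doc.is_discarded() || !doc.contains(key) || !doc[key].is_string()) {
        return {};
    }
    return doc[key].get<std::string>();   // escaped quotes and embedded braces handled by the parser
}

std::string ExtractObjectJson(const std::string& json, const std::string& key) {
    const auto doc = nlohmann::json::parse(json, nullptr, /*allow_exceptions=*/false);
    if (doc.is_discarded() || !doc.contains(key) || !doc[key].is_object()) {
        return "{}";
    }
    return doc[key].dump();               // re-serialise the nested object verbatim
}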

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 31 -
86, Replace the fragile hand-rolled parsers (ExtractStringValue and
ExtractObjectJson) with a real JSON parse-and-extract using a JSON library
(e.g., nlohmann::json): parse the incoming json string into a json object once
inside both functions (or a shared helper), then safely lookup the key and
return either the string value for ExtractStringValue or the object dump for
ExtractObjectJson; wrap parsing/lookup in try/catch and return {} or "{}" on
failure to preserve current behavior. Ensure you reference and update the
functions named ExtractStringValue and ExtractObjectJson and add the required
include and error-handling so escaped quotes and braces inside string values are
handled correctly.
native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp-14-18 (1)

14-18: ⚠️ Potential issue | 🟡 Minor

CapabilityState::Experimental and ::Verified are unreachable via the public API.

CapabilityEntry.state is typed as CapabilityState with three meaningful values (Unknown, Experimental, Verified), yet markAvailable takes no CapabilityState parameter. The implementation must therefore hard-code the resulting state (likely Verified), leaving Experimental permanently inaccessible, or inadvertently leave state as Unknown on an available = true entry — which would be a logic contradiction.

Either:

  • Add a CapabilityState parameter to markAvailable, or
  • Add a dedicated markExperimental(const std::string& featureId, ...) overload.
♻️ Option A — extend markAvailable signature
-    void markAvailable(const std::string& featureId, const std::string& reasonCode = "CAPABILITY_PROBE_PASS");
+    void markAvailable(const std::string& featureId,
+                       CapabilityState   state      = CapabilityState::Verified,
+                       const std::string& reasonCode = "CAPABILITY_PROBE_PASS");

Also applies to: 30-30
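
Under Option A, the implementation change is correspondingly small. A sketch, assuming the probe stores entries in a string-keyed map and that CapabilityEntry carries available/state/reasonCode fields (the map name and those fields are assumptions):

void CapabilityProbe::markAvailable(const std::string& featureId,
                                    CapabilityState state,
                                    const std::string& reasonCode) {
    auto& entry = entries_[featureId];   // entries_: assumed unordered_map<std::string, CapabilityEntry>
    entry.available = true;
    entry.state = state;                 // caller chooses Experimental vs Verified explicitly
    entry.reasonCode = reasonCode;       // assumed field mirroring the reasonCode parameter
}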

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp`
around lines 14 - 18, CapabilityState defines Unknown, Experimental, and
Verified but the public API only exposes markAvailable with no state parameter,
leaving Experimental unreachable or causing inconsistency between
CapabilityEntry.state and available; update the API to either add a
CapabilityState parameter to markAvailable (e.g., markAvailable(const
std::string& featureId, CapabilityState state, ...)) and propagate that value
into CapabilityEntry.state, or add a separate markExperimental(const
std::string& featureId, ...) overload that sets CapabilityEntry.state =
CapabilityState::Experimental, ensuring callers can explicitly choose
Experimental vs Verified and that markAvailable retains/sets the correct
Verified behavior; adjust all call sites and any validation logic that assumes
state values accordingly (referencing CapabilityState, CapabilityEntry.state,
markAvailable, and markExperimental).
tools/collect-mod-repro-bundle.ps1-513-518 (1)

513-518: ⚠️ Potential issue | 🟡 Minor

$overlayState is a hardcoded stub — values never change even when $runtimeEvidence is present.

available and visible are always $false and reasonCode is always "CAPABILITY_BACKEND_UNAVAILABLE", even in the branch where $runtimeEvidence is non-null (the source field is the only dynamic part). This means the repro bundle and its Markdown summary always report the overlay as unavailable, masking any live overlay data that may be in the evidence.

If this is intentional scaffolding for a future capability, add an inline comment (e.g., # TODO: populate from runtimeEvidence.result.overlayState once exposed) so readers know it is a known placeholder rather than a silent data gap.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/collect-mod-repro-bundle.ps1` around lines 513 - 518, The $overlayState
object is a hardcoded stub so available/visible/reasonCode never reflect runtime
data; update the assignment to populate these fields from runtimeEvidence (e.g.,
use runtimeEvidence.result.overlayState.available, .visible, .reasonCode when
$runtimeEvidence is non-null) and fall back to the existing defaults otherwise,
or if this is intentional scaffolding add an explicit inline TODO comment (e.g.,
"# TODO: populate from runtimeEvidence.result.overlayState once exposed") next
to the $overlayState declaration so readers know it is a placeholder; refer to
$overlayState and $runtimeEvidence to locate and change the logic.
tools/collect-mod-repro-bundle.ps1-477-494 (1)

477-494: ⚠️ Potential issue | 🟡 Minor

capabilityCount = 1 is hardcoded regardless of actual probed capabilities.

When $runtimeEvidence is present, capabilityCount is fixed at 1 rather than derived from evidence data or from the actual size of $requiredCapabilities. If the profile ever lists zero or more than one required capability, the bundle will misreport the probe count, making the snapshot misleading for diagnostics.

♻️ Proposed fix
-        capabilityCount = 1
+        capabilityCount = @($requiredCapabilities).Count
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/collect-mod-repro-bundle.ps1` around lines 477 - 494, The
capabilityCount field in capabilityProbeSnapshot is hardcoded to 1; change it to
compute the real count instead—when $runtimeEvidence is present set
capabilityCount = ($requiredCapabilities | Measure-Object).Count (or if
runtimeEvidence contains a capabilities array use that array's length), and keep
the else branch as 0; update the capabilityProbeSnapshot construction (the block
that defines
backend/probeReasonCode/capabilityCount/requiredCapabilities/source) to use this
computed count so the snapshot reflects actual required/probed capability
numbers.
tools/run-live-validation.ps1-445-456 (1)

445-456: ⚠️ Potential issue | 🟡 Minor

Prior $fatalError (from repro-bundle emission) is silently overwritten by the hard gate

When collect-mod-repro-bundle.ps1 or validate-repro-bundle.ps1 fails, line 441 sets $fatalError and $bundleClassification remains "". The hard gate then enters the elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) branch and replaces $fatalError with a generic "Hard gate could not determine repro-bundle classification." message. The original failure reason (e.g., "collect-mod-repro-bundle.ps1 failed with exit code 1") is lost from the thrown exception at line 567. The Write-Warning at line 440 still prints it, but that context is gone by the time a CI system surfaces the error.

🛡️ Proposed fix — skip hard gate when a prior fatal error exists
 if ($RequireNonBlockedClassification) {
     if (-not $EmitReproBundle) {
         $fatalError = [InvalidOperationException]::new("-RequireNonBlockedClassification requires repro-bundle emission.")
-    }
-    elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
+    }
+    elseif ($null -ne $fatalError) {
+        # A prior fatal error already explains why classification is absent; preserve the original.
+    }
+    elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
         $fatalError = [InvalidOperationException]::new("Hard gate could not determine repro-bundle classification.")
     }
     elseif ($bundleClassification -in @("blocked_environment", "blocked_profile_mismatch")) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/run-live-validation.ps1` around lines 445 - 456, The hard-gate logic
(the if ($RequireNonBlockedClassification) block) overwrites a previously-set
$fatalError from repro-bundle steps; guard this block so it only runs when no
prior fatal error exists. Modify the check to skip the entire hard-gate branch
if $fatalError is already set (e.g., wrap the current block in an outer if (-not
$fatalError) or equivalent), referencing the existing symbols
$RequireNonBlockedClassification, $EmitReproBundle, $bundleClassification and
$fatalError so the original exception from collect-mod-repro-bundle.ps1 /
validate-repro-bundle.ps1 is preserved.
src/SwfocTrainer.Core/Contracts/IBinaryFingerprintService.cs-10-17 (1)

10-17: ⚠️ Potential issue | 🟡 Minor

Add XML doc comments to all four CaptureFromPathAsync overloads.

The build check flagged all four public members for missing XML documentation. The interface-level <summary> exists but per-member docs are absent, which is inconsistent with the documentation requirement enforced by the build pipeline.

📝 Suggested XML comments
+    /// <inheritdoc cref="CaptureFromPathAsync(string, int, CancellationToken)"/>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath);

+    /// <inheritdoc cref="CaptureFromPathAsync(string, int, CancellationToken)"/>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath, CancellationToken cancellationToken);

+    /// <inheritdoc cref="CaptureFromPathAsync(string, int, CancellationToken)"/>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath, int processId);

+    /// <summary>
+    /// Captures a deterministic binary fingerprint from <paramref name="modulePath"/>
+    /// using the specified <paramref name="processId"/> for live-process context.
+    /// </summary>
     Task<BinaryFingerprint> CaptureFromPathAsync(string modulePath, int processId, CancellationToken cancellationToken);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Core/Contracts/IBinaryFingerprintService.cs` around lines 10
- 17, Add XML documentation to each of the four CaptureFromPathAsync overloads
in IBinaryFingerprintService: provide a <summary> describing the purpose
(capture a BinaryFingerprint from a module path), add <param> tags for
modulePath and where applicable processId and cancellationToken (describe
expected values and behavior), and include a <returns> describing the returned
Task<BinaryFingerprint>. Ensure each overload (CaptureFromPathAsync(string),
CaptureFromPathAsync(string, CancellationToken), CaptureFromPathAsync(string,
int), CaptureFromPathAsync(string, int, CancellationToken)) has its own specific
XML doc so the public members satisfy the build documentation rules.
src/SwfocTrainer.Core/Contracts/ICapabilityMapResolver.cs-10-21 (1)

10-21: ⚠️ Potential issue | 🟡 Minor

Add XML doc comments to the ResolveAsync overloads.

The build check flagged lines 10 and 16 for missing XML documentation on the public ResolveAsync overloads.

📝 Suggested XML comments
+    /// <inheritdoc cref="ResolveAsync(BinaryFingerprint, string, string, IReadOnlySet{string}, CancellationToken)"/>
     Task<CapabilityResolutionResult> ResolveAsync(
         BinaryFingerprint fingerprint,
         string requestedProfileId,
         string operationId,
         IReadOnlySet<string> resolvedAnchors);

+    /// <summary>
+    /// Resolves operation capability state from fingerprint-mapped anchor requirements.
+    /// </summary>
     Task<CapabilityResolutionResult> ResolveAsync(
         BinaryFingerprint fingerprint,
         string requestedProfileId,
         string operationId,
         IReadOnlySet<string> resolvedAnchors,
         CancellationToken cancellationToken);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Core/Contracts/ICapabilityMapResolver.cs` around lines 10 -
21, Add XML doc comments to both ResolveAsync overloads in
ICapabilityMapResolver: document the method purpose, the returned
Task<CapabilityResolutionResult>, and each parameter (fingerprint of type
BinaryFingerprint, requestedProfileId, operationId, resolvedAnchors) and for the
second overload include the CancellationToken parameter; make sure the summary
and param/returns tags are present and match the method signatures so the public
API has proper XML documentation.
src/SwfocTrainer.Core/Services/SdkOperationRouter.cs-260-263 (1)

260-263: ⚠️ Potential issue | 🟡 Minor

GetValue<string>() throws InvalidOperationException for non-string JsonNode elements.

If a caller populates resolvedAnchors as a JsonArray containing numeric, boolean, or nested object nodes, node?.GetValue<string>() throws, and the exception propagates out of ExtractResolvedAnchors through ExecuteAsync without a diagnostics sink write.

🛡️ Proposed fix
-return CreateAnchorSet(jsonArray.Select(node => node?.GetValue<string>()));
+return CreateAnchorSet(jsonArray.Select(node =>
+    node is JsonValue jv && jv.TryGetValue<string>(out var s) ? s : null));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Core/Services/SdkOperationRouter.cs` around lines 260 - 263,
The current JsonArray handling in ExtractResolvedAnchors uses
node?.GetValue<string>() which throws for non-string JsonNode elements; update
the Select to safely extract a string for each node by calling
node?.GetValue<object>() and converting that to a string via
Convert.ToString(...)/ToString() (preserving nulls) or falling back to
node?.ToString() for complex nodes, then feed that IEnumerable<string?> into
CreateAnchorSet so numeric, boolean and object nodes no longer throw; ensure
this change is applied in the JsonArray branch around CreateAnchorSet so
ExecuteAsync no longer propagates InvalidOperationException from
GetValue<string>().
tools/native/resolve-cmake.ps1-52-57 (1)

52-57: ⚠️ Potential issue | 🟡 Minor

"Visual Studio 18 2026" generator requires CMake ≥ 4.2 — the result DTO doesn't communicate this constraint.

The Visual Studio 18 2026 generator was added in CMake 4.2, so CMake 4.2 is the minimum version that can drive it. The result object returned by this script carries recommendedGenerator but provides no minCmakeVersion hint. The caller (build-native.ps1) cannot distinguish a version-incompatible generator from a valid one without parsing cmake --version separately. Consider adding a minCmakeVersion field alongside recommendedGenerator so callers can fail fast with a clear message.

♻️ Add minCmakeVersion to the generator map
     switch ($major) {
-        18 { return "Visual Studio 18 2026" }
-        17 { return "Visual Studio 17 2022" }
-        16 { return "Visual Studio 16 2019" }
-        default { return $null }
+        18 { return [PSCustomObject]@{ Name = "Visual Studio 18 2026"; MinCmakeVersion = "4.2" } }
+        17 { return [PSCustomObject]@{ Name = "Visual Studio 17 2022"; MinCmakeVersion = "3.14" } }
+        16 { return [PSCustomObject]@{ Name = "Visual Studio 16 2019"; MinCmakeVersion = "3.14" } }
+        default { return $null }
     }

Then propagate minCmakeVersion through the $result object so build-native.ps1 can gate on it before invoking CMake.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/native/resolve-cmake.ps1` around lines 52 - 57, Update the result DTO
from resolve-cmake.ps1 to include a minCmakeVersion alongside
recommendedGenerator so callers can check compatibility; modify the
switch($major) mapping that returns "Visual Studio 18 2026"/"Visual Studio 17
2022"/"Visual Studio 16 2019" to instead return an object/hashtable containing
both generator and minCmakeVersion (e.g., for major 18 set minCmakeVersion
"4.2"), then ensure the script populates $result.minCmakeVersion in addition to
$result.recommendedGenerator, and update build-native.ps1 to read
$result.minCmakeVersion before invoking cmake to fail fast on incompatible CMake
versions.
tools/native/resolve-cmake.ps1-75-77 (1)

75-77: ⚠️ Potential issue | 🟡 Minor

Disk-based fallback regex ^\d{2}$ never matches real VS installation directories.

Visual Studio installs under 4-digit year subdirectories (2019, 2022, 2026), not 2-digit version numbers. The ^\d{2}$ filter always yields an empty $majorDirs collection, making $instances permanently empty and causing Resolve-VisualStudioMetadataFromDisk to always return null metadata. The fallback is fully inoperative for every standard VS layout.

Practical impact is low (vswhere is always present when VS is installed), but the fallback is dead code on any real machine.

🐛 Fix the year-directory regex and version extraction
             $majorDirs = Get-ChildItem -Path $root -Directory -ErrorAction SilentlyContinue |
-                Where-Object { $_.Name -match '^\d{2}$' } |
+                Where-Object { $_.Name -match '^\d{4}$' } |
                 Sort-Object Name -Descending

Additionally update the productLineVersion derivation so it feeds the correct major number into Resolve-RecommendedGenerator (which expects 16/17/18, not a 4-digit year):

                     $instances.Add([PSCustomObject]@{
                         instancePath = $edition.FullName
-                        productLineVersion = $majorDir.Name
+                        productLineVersion = switch ([int]$majorDir.Name) {
+                            2026 { "18" }
+                            2022 { "17" }
+                            2019 { "16" }
+                            default { $majorDir.Name }
+                        }
                     })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/native/resolve-cmake.ps1` around lines 75 - 77, The disk-fallback
currently filters directories with '^\d{2}$' so $majorDirs is empty; change the
regex to match 4-digit year folders (e.g. '^\d{4}$') and update the
productLineVersion derivation so Resolve-RecommendedGenerator receives the VS
major number (map year -> major, e.g. 2019 -> 16, 2022 -> 17, 2026 -> 18) when
building $productLineVersion used by Resolve-RecommendedGenerator in
Resolve-VisualStudioMetadataFromDisk so $instances is populated correctly.
tools/native/build-native.ps1-79-82 (1)

79-82: ⚠️ Potential issue | 🟡 Minor

CMake minimum version not validated before using the Visual Studio 18 2026 generator.

The lowest CMake version that supports the Visual Studio 18 2026 generator is CMake 4.2. The script runs cmake --version for a liveness check but never parses the version number. When VS 18 is detected by resolve-cmake.ps1 and a CMake < 4.2 binary is on PATH, the configure step fails and the script surfaces only the generic "native configure failed for Windows mode" message, giving no hint that the CMake version is the culprit. Add a minimum-version gate when the selected generator is "Visual Studio 18 2026".

🛡️ Proposed version gate in `Invoke-WindowsBuild`
     & $CmakePath --version
     if ((Get-LastExitCodeOrZero) -ne 0) {
         throw "cmake --version failed for '$CmakePath'."
     }
+
+    # VS 18 2026 generator requires CMake >= 4.2
+    if ($RecommendedGenerator -like "Visual Studio 18*") {
+        $verLine = (& $CmakePath --version 2>&1 | Select-String '[\d]+\.[\d]+\.[\d]+' | Select-Object -First 1)
+        if ($verLine -match '(\d+)\.(\d+)') {
+            $maj = [int]$Matches[1]; $min = [int]$Matches[2]
+            if ($maj -lt 4 -or ($maj -eq 4 -and $min -lt 2)) {
+                throw "[CMAKE_VERSION_INSUFFICIENT] Generator '$RecommendedGenerator' requires CMake >= 4.2 but found $maj.$min."
+            }
+        }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/native/build-native.ps1` around lines 79 - 82, When the Windows
generator "Visual Studio 18 2026" is selected (resolved in resolve-cmake.ps1 and
used in Invoke-WindowsBuild), add a minimum CMake version check after invoking
$CmakePath --version: parse the semantic version from the output (or call cmake
--version and extract the major.minor.patch), compare it against 4.2.0 and throw
a clear error if the installed CMake is older; update the existing liveness
check that uses Get-LastExitCodeOrZero and the $CmakePath variable so that it
fails with a specific message like "CMake >= 4.2 required for Visual Studio 18
2026 generator" instead of the generic configure failure.
docs/pr49-remediation-checklist.md-64-73 (1)

64-73: ⚠️ Potential issue | 🟡 Minor

psscriptanalyzer_psavoidusingwritehost entries for the reviewed scripts appear stale.

normalize-signature-pack.ps1, capture-binary-fingerprint.ps1, extract-pe-metadata.ps1, and generate-signature-candidates.ps1 each use Write-Output, not Write-Host. These four entries (4 of the 20 total) may reflect a pre-remediation snapshot. Consider regenerating the checklist or noting which items are already resolved.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/pr49-remediation-checklist.md` around lines 64 - 73, The checklist entry
for the rule psscriptanalyzer_psavoidusingwritehost is stale: update the
remediation checklist to reflect that normalize-signature-pack.ps1,
capture-binary-fingerprint.ps1, extract-pe-metadata.ps1, and
generate-signature-candidates.ps1 use Write-Output (not Write-Host) by
regenerating or re-scanning the scripts and removing or marking these four files
as resolved; ensure the tools/native/*.ps1 and
tools/research/run-capability-intel.ps1 entries remain accurate while
regenerating the report so the listed files (tools/native/build-native.ps1,
tools/native/resolve-cmake.ps1, tools/research/run-capability-intel.ps1,
tools/run-live-validation.ps1) are only included if they truly violate the rule.
docs/EXTERNAL_TOOLS_SETUP.md-98-103 (1)

98-103: ⚠️ Potential issue | 🟡 Minor

Env var override example shows the default value — not useful.

Setting SWFOC_EXTENDER_PIPE_NAME to "SwfocExtenderBridge" (the default) doesn't illustrate an override scenario. Readers expecting a custom-value example will be confused.

📝 Suggested fix
-$env:SWFOC_EXTENDER_PIPE_NAME = "SwfocExtenderBridge"
+$env:SWFOC_EXTENDER_PIPE_NAME = "MyCustomExtenderPipe"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/EXTERNAL_TOOLS_SETUP.md` around lines 98 - 103, The example currently
sets SWFOC_EXTENDER_PIPE_NAME to the default "SwfocExtenderBridge", which is
misleading; update the docs (the section that mentions SwfocExtender.Host and
the named pipe SwfocExtenderBridge) to show a real override by replacing the
example value with a custom pipe name (e.g., MyCustomPipe) and add a short note
that this environment variable changes the named pipe the host listens on so
readers see how to override the default.
docs/LIVE_VALIDATION_RUNBOOK.md-74-80 (1)

74-80: ⚠️ Potential issue | 🟡 Minor

"Required for runtime-affecting changes" conflicts with the schema's unconditional requirement.

tools/schemas/repro-bundle.schema.json (lines 14–18) adds all five sections (selectedHostProcess, backendRouteDecision, capabilityProbeSnapshot, hookInstallReport, overlayState) to the top-level required array unconditionally — every bundle of any scope must include them. Labelling them "required for runtime-affecting changes" here implies they're optional for non-runtime changes, which is incorrect and will mislead consumers generating bundles.

Consider updating the heading to: "vNext bundle sections (required in all v1.1 bundles)" and removing the qualifier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/LIVE_VALIDATION_RUNBOOK.md` around lines 74 - 80, Update the heading in
LIVE_VALIDATION_RUNBOOK.md to reflect that the five vNext bundle sections
(`selectedHostProcess`, `backendRouteDecision`, `capabilityProbeSnapshot`,
`hookInstallReport`, `overlayState`) are unconditionally required by the schema;
replace the phrase "required for runtime-affecting changes" with something like
"vNext bundle sections (required in all v1.1 bundles)" and remove the qualifier
that implies conditionality, and ensure the note references the schema's
top-level `required` array in tools/schemas/repro-bundle.schema.json so
consumers are not misled.
tools/fixtures/repro_bundle_sample.json-48-52 (1)

48-52: ⚠️ Potential issue | 🟡 Minor

hookInstallReport.state and reasonCode are semantically inconsistent.

state: "unknown" implies the hook install outcome is undetermined, while reasonCode: "HOOK_INSTALL_FAILED" asserts a definite failure. These contradict each other. A consistent sample would pair a known outcome with a matching reason code:

🔧 Option A — known failure state
   "hookInstallReport": {
-    "state": "unknown",
+    "state": "failed",
     "reasonCode": "HOOK_INSTALL_FAILED",
     "details": "Not captured in sample."
   },
🔧 Option B — genuinely unknown state
   "hookInstallReport": {
     "state": "unknown",
-    "reasonCode": "HOOK_INSTALL_FAILED",
+    "reasonCode": "HOOK_STATE_UNKNOWN",
     "details": "Not captured in sample."
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/fixtures/repro_bundle_sample.json` around lines 48 - 52, The
hookInstallReport object currently has mismatched semantics between the state
and reasonCode fields; update hookInstallReport so state and reasonCode are
consistent—for a known failure set "state" to "failed" (or "error") and keep
reasonCode "HOOK_INSTALL_FAILED" and a meaningful details string, or for an
unknown outcome keep "state" as "unknown" and change/remove reasonCode (e.g.,
use "UNKNOWN" or omit reasonCode) and update details accordingly; modify the
hookInstallReport.{state,reasonCode,details} values to follow one of these two
consistent options.
README.md-136-145 (1)

136-145: ⚠️ Potential issue | 🟡 Minor

Research tooling section needs an explicit Windows-only note.

All scripts in this section operate on Windows PE binaries (swfoc.exe) and require a live Windows game installation. Per project conventions, Windows target assumptions must be called out explicitly in documentation. Additionally, the output paths on lines 141–142 reference TestResults/research/<runId>/, which mirrors the per-run path deviation flagged in the script itself.

🔧 Suggested header update
-## R&D Research Tooling (Deep RE Track)
+## R&D Research Tooling (Deep RE Track — Windows only)

As per coding guidelines: "Windows live target assumptions must be kept explicit in documentation and script help."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 136 - 145, Add an explicit Windows-only note to the
"R&D Research Tooling (Deep RE Track)" section clarifying these scripts operate
on Windows PE binaries and require a live Windows game installation; update the
header and/or a short paragraph above the code block to state "Windows-only:
requires a live Windows installation and targets PE binaries (e.g., swfoc.exe)".
Also correct the output path guidance around generate-signature-candidates.ps1
and normalize-signature-pack.ps1 to avoid implying a static per-run location by
referencing the runId placeholder consistently (e.g.,
TestResults/research/<runId>/...) and mention that runId varies per execution.
Ensure the note references the exact script names shown
(run-capability-intel.ps1, capture-binary-fingerprint.ps1,
generate-signature-candidates.ps1, normalize-signature-pack.ps1) so readers know
which tools are Windows-only.
TestResults/runs/20260218-213838/repro-bundle.md-21-22 (1)

21-22: ⚠️ Potential issue | 🟡 Minor

Scripting artifact — .Replace('|','/') expression emitted verbatim, causing malformed table rows.

The markdown generator appended the PowerShell method-call text literally rather than executing it. The | characters inside the single-quoted argument '|' are parsed as column separators, producing a 7-cell row in a 6-column table (confirmed by markdownlint MD056). Most renderers will silently drop the extra cell.

The corresponding JSON (repro-bundle.json lines 11 and 25) shows neither command line actually contains a |, so the replacement was superfluous — the trailing expression can simply be stripped.

🔧 Suggested fix
-| 45908 | swfoc.exe | unknown | 2011.81 | 3447786229,1397421866 | "[REDACTED_GAME_PATH]\swfoc.exe" STEAMMOD=3447786229 STEAMMOD=1397421866 .Replace('|','/') |
+| 45908 | swfoc.exe | unknown | 2011.81 | 3447786229,1397421866 | "[REDACTED_GAME_PATH]\swfoc.exe" STEAMMOD=3447786229 STEAMMOD=1397421866 |
-| 32952 | StarWarsG.exe | game_host | 2223.37 | 3447786229,1397421866 | StarWarsG.exe STEAMMOD=3447786229 STEAMMOD=1397421866  NOARTPROCESS IGNOREASSERTS.Replace('|','/') |
+| 32952 | StarWarsG.exe | game_host | 2223.37 | 3447786229,1397421866 | StarWarsG.exe STEAMMOD=3447786229 STEAMMOD=1397421866  NOARTPROCESS IGNOREASSERTS |

If future command lines do contain |, escape them as \| in the generator instead of calling .Replace() on the rendered string.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/repro-bundle.md` around lines 21 - 22, The
markdown generator emitted the literal PowerShell call ".Replace('|','/')" into
repro-bundle.md, producing malformed table rows; remove the trailing
".Replace('|','/')" expression from the generated command-line strings (the
entries for swfoc.exe and StarWarsG.exe) so the renderer receives the actual
command text (verify against repro-bundle.json lines showing no '|' present),
and instead implement escaping of any pipe characters at generation time by
replacing '|' with '\|' before writing Markdown if pipes can appear in future
command lines.

Comment on lines 26 to 36
class CapabilityProbe {
public:
    CapabilityProbe() = default;

    void markAvailable(const std::string& featureId, const std::string& reasonCode = "CAPABILITY_PROBE_PASS");
    bool isAvailable(const std::string& featureId) const;
    const std::unordered_map<std::string, CapabilityEntry>& snapshot() const noexcept;

private:
    std::unordered_map<std::string, CapabilityEntry> capabilities_;
};

⚠️ Potential issue | 🟠 Major

capabilities_ map is not thread-safe — same data-race risk as HookLifecycleManager.

markAvailable (write) and isAvailable/snapshot (reads) on the same object from different threads constitute a data race under the C++ memory model. Add a mutable std::shared_mutex mutex_ and use std::unique_lock for writes and std::shared_lock for reads — mirroring the fix suggested for HookLifecycleManager.
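
As a rough sketch of that pattern on this class, assuming the header gains a `mutable std::shared_mutex mutex_;` member and that isAvailable is a plain lookup (its body is not shown in this PR excerpt):

```cpp
#include <mutex>
#include <shared_mutex>

void CapabilityProbe::markAvailable(const std::string& featureId, const std::string& reasonCode) {
    std::unique_lock<std::shared_mutex> lock(mutex_);  // exclusive: mutates capabilities_
    CapabilityEntry entry {};
    entry.available = true;
    entry.state = CapabilityState::Verified;
    entry.reasonCode = reasonCode;
    capabilities_[featureId] = entry;
}

bool CapabilityProbe::isAvailable(const std::string& featureId) const {
    std::shared_lock<std::shared_mutex> lock(mutex_);  // shared: concurrent readers allowed
    const auto it = capabilities_.find(featureId);     // assumed body; actual lookup not shown here
    return it != capabilities_.end() && it->second.available;
}
```

Note that snapshot() returns a reference to the map, so a shared_lock inside it only guards the call itself; callers that iterate the returned reference would still need external coordination or a by-value copy.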

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp`
around lines 26 - 36, The capabilities_ map in CapabilityProbe is vulnerable to
data races when markAvailable (writer) and isAvailable/snapshot (readers) run
concurrently; add a mutable std::shared_mutex member (e.g., mutable
std::shared_mutex mutex_) to the class and protect markAvailable with a
std::unique_lock<std::shared_mutex> while protecting isAvailable and snapshot
with std::shared_lock<std::shared_mutex>, ensuring safe concurrent access to
capabilities_ and preserving return types (const snapshot should still return
const std::unordered_map<std::string, CapabilityEntry>&).

Comment on lines 26 to 37
class HookLifecycleManager {
public:
    HookLifecycleManager() = default;

    void markInstalled(const std::string& hookId);
    void markFailed(const std::string& hookId, const std::string& reasonCode);
    void markRolledBack(const std::string& hookId);
    HookRecord get(const std::string& hookId) const;

private:
    std::unordered_map<std::string, HookRecord> hooks_;
};

⚠️ Potential issue | 🟠 Major

No thread-safety on hooks_ — potential data race in hook/game-thread contexts.

HookLifecycleManager is stateful and likely accessed from both the game thread (hook callbacks) and the trainer/management thread. std::unordered_map is not thread-safe for concurrent writes or mixed read/write access; concurrent markInstalled/markFailed/get calls cause undefined behaviour.

Consider protecting hooks_ with a std::shared_mutex (C++17), using a reader-writer lock pattern to allow concurrent reads via get:

🔒 Proposed thread-safety fix
 #pragma once

 #include <string>
 #include <unordered_map>
+#include <shared_mutex>

 ...

 class HookLifecycleManager {
 public:
     HookLifecycleManager() = default;

     void markInstalled(const std::string& hookId);
     void markFailed(const std::string& hookId, const std::string& reasonCode);
     void markRolledBack(const std::string& hookId);
     HookRecord get(const std::string& hookId) const;

 private:
     std::unordered_map<std::string, HookRecord> hooks_;
+    mutable std::shared_mutex mutex_;
 };

Then in .cpp — write methods acquire std::unique_lock<std::shared_mutex>, get() acquires std::shared_lock<std::shared_mutex>.
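
A minimal sketch of that .cpp shape, assuming the mutex_ member from the diff above; the record assignment is a placeholder because HookRecord's fields are not shown in this review:

```cpp
#include <mutex>
#include <shared_mutex>

void HookLifecycleManager::markInstalled(const std::string& hookId) {
    std::unique_lock<std::shared_mutex> lock(mutex_);  // exclusive lock for every mutation of hooks_
    hooks_[hookId] = HookRecord{};                     // placeholder: keep the existing field assignments here
}

HookRecord HookLifecycleManager::get(const std::string& hookId) const {
    std::shared_lock<std::shared_mutex> lock(mutex_);  // shared lock: readers do not block each other
    const auto it = hooks_.find(hookId);
    return it != hooks_.end() ? it->second : HookRecord{};  // assumed default for unknown ids; by-value return keeps the copy inside the lock
}
```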

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp`
around lines 26 - 37, The hooks_ map in HookLifecycleManager is unprotected and
can race across threads; add a std::shared_mutex member (e.g., mutex_) and
include <shared_mutex>, then change the writer methods markInstalled,
markFailed, markRolledBack to acquire a std::unique_lock<std::shared_mutex>
before mutating hooks_, and change get to acquire a
std::shared_lock<std::shared_mutex> for read-only access so concurrent readers
are allowed while writers are exclusive.

Comment on lines +240 to +249
private static ExtenderResult ParseResponse(string commandId, string? responseLine)
{
    if (string.IsNullOrWhiteSpace(responseLine))
    {
        return CreateNoResponseResult(commandId);
    }

    var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
    return result ?? CreateInvalidResponseResult(commandId);
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

System.Text.Json JsonSerializer.Deserialize JsonException malformed JSON

💡 Result:

System.Text.Json.JsonSerializer.Deserialize(...) throws JsonException when either the JSON text isn’t valid JSON, there’s extra/trailing data beyond a single JSON value, or the JSON shape/types don’t match the .NET target type. The exception message typically includes Path, LineNumber, and BytePositionInLine, which tell you exactly where parsing/conversion failed. [1][2]

1) First: confirm you’re actually deserializing JSON

A very common cause is that the payload is empty, truncated, or not JSON at all (for example an HTML error page, proxy response, 429/500 text, etc.). This often shows up as errors like “invalid start of a value” (for example '<…') or “input does not contain any JSON tokens”. [4][5]

Debug pattern (HTTP): log the raw response before deserializing.

var body = await response.Content.ReadAsStringAsync();
try
{
    return JsonSerializer.Deserialize<MyType>(body);
}
catch (JsonException ex)
{
    throw new InvalidOperationException(
        $"JSON parse failed: {ex.Message}\nRaw body:\n{body}", ex);
}

2) If it’s “almost JSON”: enable the specific allowances you need

By default, System.Text.Json rejects comments and trailing commas. You can allow them via options. [3]

var options = new JsonSerializerOptions
{
    ReadCommentHandling = JsonCommentHandling.Skip,
    AllowTrailingCommas = true,
};
var obj = JsonSerializer.Deserialize<MyType>(json, options);

3) If JSON is valid but doesn’t match your .NET type

You’ll get JsonException like “The JSON value could not be converted to …” with a Path pointing to the offending property. [2]

Typical fixes:

  • Change your model property types / nullability to match payload.
  • If numbers come as strings (or vice versa), use JsonNumberHandling (or a custom converter). Microsoft docs cover handling “invalid” forms like quoted numbers. [3]

4) If you have multiple top-level values / trailing junk

Deserialize expects one JSON value (one object/array/etc.). Extra data after it causes failure. If you truly need to read multiple values, use Utf8JsonReader configured with AllowMultipleValues. [2][5]


If you paste the exact exception message (including Path/LineNumber/BytePositionInLine) and a small sample of the JSON around that position, I can pinpoint the cause and the minimal fix.

Sources: [1] https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/converters-how-to
[2] https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializer.deserialize
[3] https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/invalid-json
[4] https://learn.microsoft.com/en-us/answers/questions/2201902/jsonreaderexception-when-trying-to-connect-use-azu
[5] https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/use-utf8jsonreader


🏁 Script executed:

# Find the SendCoreAsync method to understand exception handling context
fd "NamedPipeExtenderBackend" -t f

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 196


🏁 Script executed:

# Search for SendCoreAsync and exception handling
rg "SendCoreAsync" src/SwfocTrainer.Runtime --context 15 -A 10

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 2480


🏁 Script executed:

# Check how ParseResponse is called and what catches exceptions
rg "ParseResponse" src/SwfocTrainer.Runtime -B 5 -A 5

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 2271


🏁 Script executed:

# Get the full SendCoreAsync method
rg "private static async Task<ExtenderResult> SendCoreAsync" src/SwfocTrainer.Runtime -A 50

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 5102


🏁 Script executed:

# Look for catch blocks in SendCoreAsync and surrounding code
rg "catch|SendCoreAsync" src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs -B 2 -A 8

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 1768


🏁 Script executed:

# Find all CreateXxxResult methods to understand what states map to what
rg "Create.*Result\(" src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs -B 2 -A 5

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 1808


🏁 Script executed:

# Check the entire file to understand exception flow and result creation
cat -n src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs | head -150

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 6524


🏁 Script executed:

# Get full implementation of Create*Result methods to see HookState values
rg "CreateInvalidResponseResult|CreateUnreachableResult|CreateTimeoutResult|CreateNoResponseResult" src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs -A 10

Repository: Prekzursil/SWFOC-Mod-Menu

Length of output: 2480


Catch JsonException in ParseResponse to avoid triggering unnecessary host restarts.

When JsonSerializer.Deserialize encounters malformed JSON, it throws JsonException. This exception bubbles up through SendCoreAsync's generic catch (Exception) handler and returns CreateUnreachableResult with HookState: "unreachable". In SendAsync, the "unreachable" state triggers the retry logic that restarts the bridge host process—even though the backend is reachable; the problem is just a malformed response.

Catch JsonException directly in ParseResponse and map it to CreateInvalidResponseResult, which returns HookState: "invalid_response" and will not trigger an unnecessary host restart.

🔧 Suggested fix
private static ExtenderResult ParseResponse(string commandId, string? responseLine)
{
    if (string.IsNullOrWhiteSpace(responseLine))
    {
        return CreateNoResponseResult(commandId);
    }
-
-   var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
-   return result ?? CreateInvalidResponseResult(commandId);
+   try
+   {
+       var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
+       return result ?? CreateInvalidResponseResult(commandId);
+   }
+   catch (JsonException)
+   {
+       return CreateInvalidResponseResult(commandId);
+   }
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

    private static ExtenderResult ParseResponse(string commandId, string? responseLine)
    {
        if (string.IsNullOrWhiteSpace(responseLine))
        {
            return CreateNoResponseResult(commandId);
        }

        var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
        return result ?? CreateInvalidResponseResult(commandId);
    }

After:

    private static ExtenderResult ParseResponse(string commandId, string? responseLine)
    {
        if (string.IsNullOrWhiteSpace(responseLine))
        {
            return CreateNoResponseResult(commandId);
        }

        try
        {
            var result = JsonSerializer.Deserialize<ExtenderResult>(responseLine, JsonOptions);
            return result ?? CreateInvalidResponseResult(commandId);
        }
        catch (JsonException)
        {
            return CreateInvalidResponseResult(commandId);
        }
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Runtime/Services/NamedPipeExtenderBackend.cs` around lines
240 - 249, In ParseResponse, catch System.Text.Json.JsonException around the
JsonSerializer.Deserialize call and return
CreateInvalidResponseResult(commandId) when deserialization fails; keep the
existing null/whitespace check that returns CreateNoResponseResult(commandId)
and the existing path that returns the non-null result, but ensure any
JsonException does not propagate (so it won't reach SendCoreAsync's generic
catch) by handling it inside ParseResponse and mapping to
CreateInvalidResponseResult.

Comment on lines +9 to +10
| aotr_1397421866_swfoc | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |
| roe_3447786229_swfoc | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |

⚠️ Potential issue | 🟠 Major

Evidence table contains only unfilled template placeholders — this artifact provides no evidentiary value.

Both profile rows (aotr_1397421866_swfoc and roe_3447786229_swfoc) still hold the verbatim template strings <fill pid/mode/reasonCode>, <pass/fail/skip + reason>, and <overall>. The file is committed as a live-run evidence artifact, but none of the per-profile attach diagnostics, workflow outcomes, or overall results have been populated. The document does not substantiate the claims made in the PR about the AOTR/ROE run.

These cells must be filled with the actual values captured during run 20260218-213838 (pid, mode, reasonCode from the capability probe snapshot; pass/fail/skip with reason codes from the tactical toggle and hero helper workflows; and the derived overall result per profile) before this file can serve as evidence.

📝 Example of a filled row (replace with actual values)
-| aotr_1397421866_swfoc | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |
-| roe_3447786229_swfoc  | <fill pid/mode/reasonCode> | <pass/fail/skip + reason> | <pass/fail/skip + reason> | <overall> |
+| aotr_1397421866_swfoc | pid=12345 / Sdk / CAPABILITY_PROBE_PASS | pass | pass | pass |
+| roe_3447786229_swfoc  | pid=12345 / Sdk / CAPABILITY_PROBE_PASS | pass | pass | pass |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@TestResults/runs/20260218-213838/issue-19-evidence-template.md` around lines
9 - 10, The evidence table contains unfilled placeholders for the two profile
rows aotr_1397421866_swfoc and roe_3447786229_swfoc; replace each template cell
(<fill pid/mode/reasonCode>, <pass/fail/skip + reason>, <overall>) with the
actual captured values from run 20260218-213838: fill pid/mode/reasonCode with
the capability probe snapshot values, replace the two workflow result cells with
the tactical toggle and hero helper outcomes including reason codes (e.g.,
"pass: reason" or "fail: reason"), and compute/populate the derived overall
result for each profile so the table reflects real evidence.

Comment on lines +1 to +5
param(
    [ValidateSet("Auto", "Windows", "Wsl")][string]$Mode = "Auto",
    [string]$Configuration = "Release",
    [string]$BuildDir = "native/build-win-vs"
)

🛠️ Refactor suggestion | 🟠 Major

Missing deterministic smoke/fixture mode (guideline violation).

This script has concrete side effects (configure, compile, artifact copy) but exposes no -WhatIf or -DryRun path and has no associated deterministic smoke command documented in a script help block. As per coding guidelines, every tooling script change must include a deterministic fixture or smoke command.

♻️ Minimal addition: add a -DryRun switch and script help
+<#
+.SYNOPSIS
+  Orchestrates the native SwfocExtender build (Windows VS/CMake or WSL).
+.PARAMETER Mode
+  Build mode: Auto (default), Windows, Wsl.
+.PARAMETER Configuration
+  CMake build configuration (default: Release).
+.PARAMETER BuildDir
+  Output directory for the Windows build (default: native/build-win-vs).
+.PARAMETER DryRun
+  Emit planned actions without executing CMake or copying artifacts.
+.NOTES
+  Windows build requires CMake >= 4.2 when Visual Studio 18 (2026) is detected.
+  WSL mode requires WSL installed and configured; paths are converted via wsl.exe wslpath.
+.EXAMPLE
+  # Smoke-test: verify cmake/VS detection without building
+  .\build-native.ps1 -DryRun
+#>
 param(
     [ValidateSet("Auto", "Windows", "Wsl")][string]$Mode = "Auto",
     [string]$Configuration = "Release",
-    [string]$BuildDir = "native/build-win-vs"
+    [string]$BuildDir = "native/build-win-vs",
+    [switch]$DryRun
 )

Pass $DryRun into Invoke-WindowsBuild and Invoke-WslBuild to gate the actual cmake / wsl.exe invocations.

As per coding guidelines, every tooling script change must include a deterministic fixture or smoke command and Windows live target assumptions must be kept explicit in documentation and script help.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/native/build-native.ps1` around lines 1 - 5, Add a deterministic
smoke/dry-run path and a script help block: update the param block to include a
[switch]$DryRun and document the script with a comment-based help header
describing Mode, Configuration, BuildDir and the new DryRun switch; then
propagate $DryRun into the build invocations so that Invoke-WindowsBuild and
Invoke-WslBuild respect it and gate any real cmake/wsl.exe or artifact copy
operations (i.e., when $DryRun is set, run only the deterministic smoke
steps/logging and skip side-effecting commands).

Comment on lines +30 to +32
$outputDir = Join-Path $repoRoot "TestResults/research/$RunId"
New-Item -ItemType Directory -Path $outputDir -Force | Out-Null
$OutputPath = Join-Path $outputDir "signature-pack.json"

⚠️ Potential issue | 🟠 Major

Output path TestResults/research/$RunId violates the TestResults/runs/<runId>/ mandate.

Same issue as in capture-binary-fingerprint.ps1. Migrate to TestResults/runs/$RunId/research/ so artifacts are discoverable by downstream validation and repro-bundle tooling.

As per coding guidelines: "Per-run output isolation must be maintained under TestResults/runs/<runId>/ directory structure".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/generate-signature-candidates.ps1` around lines 30 - 32, The
script currently sets $outputDir to TestResults/research/$RunId which violates
per-run isolation; change $outputDir to join TestResults/runs/$RunId/research
(e.g., replace "TestResults/research/$RunId" with
"TestResults/runs/$RunId/research") and ensure $OutputPath (signature-pack.json)
is built from that new $outputDir; update any New-Item or downstream uses that
reference $outputDir so artifacts are created under
TestResults/runs/<runId>/research.

Comment on lines +53 to +66
$anchorIds = @($anchors | ForEach-Object { $_.id } | Sort-Object -Unique)

$operations = [ordered]@{
    list_selected = [ordered]@{ requiredAnchors = @(); optionalAnchors = @("selected_hp", "selected_shield", "selected_owner_faction") }
    list_nearby = [ordered]@{ requiredAnchors = @(); optionalAnchors = @("selected_hp", "selected_shield", "selected_owner_faction") }
    spawn = [ordered]@{ requiredAnchors = @("spawn_point_write"); optionalAnchors = @("planet_owner") }
    kill = [ordered]@{ requiredAnchors = @("selected_hp"); optionalAnchors = @("selected_shield") }
    set_owner = [ordered]@{ requiredAnchors = @("selected_owner_faction"); optionalAnchors = @("planet_owner") }
    teleport = [ordered]@{ requiredAnchors = @("selected_speed"); optionalAnchors = @("selected_owner_faction") }
    set_planet_owner = [ordered]@{ requiredAnchors = @("planet_owner"); optionalAnchors = @() }
    set_hp = [ordered]@{ requiredAnchors = @("selected_hp"); optionalAnchors = @("selected_shield") }
    set_shield = [ordered]@{ requiredAnchors = @("selected_shield"); optionalAnchors = @("selected_hp") }
    set_cooldown = [ordered]@{ requiredAnchors = @("selected_cooldown_multiplier"); optionalAnchors = @("selected_speed") }
}

⚠️ Potential issue | 🟠 Major

$anchorIds is computed but never used — missing cross-validation of operation anchors.

$anchorIds (line 53) accumulates all anchor IDs derived from the profile's signatureSets. It is never referenced again. The $operations block (lines 55–66) hardcodes anchor names like "spawn_point_write", "selected_hp", "planet_owner", etc. without checking that these IDs actually exist in $anchors. This produces a signature pack that may silently reference non-existent anchors, causing downstream validation failures.

The dead variable strongly indicates that a cross-validation step was planned but never implemented.

🐛 Proposed fix — add cross-validation after building $operations
 $anchorIds = @($anchors | ForEach-Object { $_.id } | Sort-Object -Unique)
+
+# Cross-validate operation anchor references against the generated anchor set
+foreach ($opName in $operations.Keys) {
+    $op = $operations[$opName]
+    foreach ($ref in @($op.requiredAnchors) + @($op.optionalAnchors)) {
+        if ($ref -and $anchorIds -notcontains $ref) {
+            throw "[ERR_ANCHOR_REF_INVALID] Operation '$opName' references unknown anchor '$ref'"
+        }
+    }
+}
🧰 Tools
🪛 PSScriptAnalyzer (1.24.0)

[warning] 53-53: The variable 'anchorIds' is assigned but never used.

(PSUseDeclaredVarsMoreThanAssignments)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/generate-signature-candidates.ps1` around lines 53 - 66,
$anchorIds is collected but never used, so implement a cross-validation step
after building $operations that verifies each operation's requiredAnchors and
optionalAnchors exist in $anchorIds; iterate the [ordered]$operations (symbol:
$operations) and for each operation check its requiredAnchors and
optionalAnchors against $anchorIds, emit processLogger/error or Write-Error for
any missing required anchors (and fail/exit or mark the operation invalid) and
emit warnings for missing optional anchors (or remove them), ensuring you
reference $anchors/$anchorIds when validating so no operation references
nonexistent anchor IDs.

Comment on lines +16 to +18
if ([string]::IsNullOrWhiteSpace($OutputRoot)) {
    $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId"
}

⚠️ Potential issue | 🟠 Major

Output path TestResults/research/ violates the per-run isolation convention.

The default $OutputRoot is set to TestResults/research/$RunId but all tooling scripts in this repo are required to write per-run output under TestResults/runs/<runId>/.

🔧 Proposed fix
-    $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId"
+    $OutputRoot = Join-Path $repoRoot "TestResults/runs/$RunId"

Based on learnings: "Applies to tools/**/*.{sh,bat,ps1,py,js,ts} : Per-run output isolation must be maintained under TestResults/runs// directory structure"

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

    if ([string]::IsNullOrWhiteSpace($OutputRoot)) {
        $OutputRoot = Join-Path $repoRoot "TestResults/research/$RunId"
    }

After:

    if ([string]::IsNullOrWhiteSpace($OutputRoot)) {
        $OutputRoot = Join-Path $repoRoot "TestResults/runs/$RunId"
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/run-capability-intel.ps1` around lines 16 - 18, The default
$OutputRoot currently points to "TestResults/research/$RunId" which violates the
per-run isolation rule; update the assignment of $OutputRoot (when
[string]::IsNullOrWhiteSpace($OutputRoot)) to use the run-scoped directory under
TestResults/runs by setting $OutputRoot = Join-Path $repoRoot
"TestResults/runs/$RunId" (or "TestResults/runs/$RunId/research" if you want to
preserve the research subfolder) so all outputs are placed under the
TestResults/runs/<RunId>/ hierarchy; adjust any dependent uses of $OutputRoot if
needed.

)

Write-Host "=== Running $Name ==="
Write-Output "=== Running $Name ==="

⚠️ Potential issue | 🔴 Critical

Write-Output inside Invoke-LiveTest contaminates the function's return value — all TRX reads will silently fail

In PowerShell, Write-Output writes to the success output stream (stream 1), which is collected when a function's result is assigned to a variable. Write-Host (the previous implementation per the AI summary) writes to the host/information stream and is never collected this way.

Because Invoke-LiveTest is called with:

$executedTrx = Invoke-LiveTest -Name $test.Name ...   # line 326

$executedTrx becomes a two-element array:

@("=== Running Live Credits ===", "C:\...\live-credits.trx")

instead of the expected string path.

Read-TrxSummary declares [string]$TrxPath, so PowerShell coerces the array using $OFS (default: space):

$TrxPath = "=== Running Live Credits === C:\...\live-credits.trx"

Test-Path returns $false for this garbage path, the 120-second retry loop exhausts, and every test is recorded with Outcome = "Missing" — a silent, catastrophic regression with no thrown exception.

🐛 Proposed fix
-    Write-Output "=== Running $Name ==="
+    Write-Host "=== Running $Name ==="
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/run-live-validation.ps1` at line 127, The call inside Invoke-LiveTest
that uses Write-Output ("=== Running $Name ===") contaminates the function's
returned value (seen by $executedTrx) — replace that Write-Output with a
host/information-only writer (e.g., Write-Host or Write-Information with
-InformationAction Continue) so the banner text is not emitted on the success
output stream; update the line in Invoke-LiveTest where "=== Running $Name ==="
is emitted and verify Read-TrxSummary still receives a single string path (no
extraneous output).

- remove PowerShell automatic-variable assignments in native and live-validation scripts
- apply file-scoped cppcheck missingIncludeSystem suppression on native bridge/core/plugin files only
- document targeted suppression rationale in remediation checklist

Co-authored-by: Codex <noreply@openai.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0c4930d595

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +185 to +189
if (string.Equals(backendPreference, "helper", StringComparison.OrdinalIgnoreCase))
{
    return defaultBackend == ExecutionBackendKind.Save
        ? ExecutionBackendKind.Save
        : ExecutionBackendKind.Helper;


P2 Badge Restrict helper backend preference to helper actions

Routing every non-save action to ExecutionBackendKind.Helper when backendPreference is "helper" causes memory/code-patch actions to bypass real memory writes and return a dispatched helper success instead. In this commit, helper execution remains a dispatch stub (ExecuteHelperActionAsync) rather than a memory backend, so profiles that set backendPreference: "helper" can silently report success for actions like set_unit_cap without applying the mutation.


Prekzursil and others added 2 commits February 22, 2026 13:56
- Move cppcheck file suppressions to the top of affected native files
- Replace designated initializers with explicit member assignment to improve analyzer visibility
- Mark protocol/contract carrier members maybe_unused where read paths are deferred

Co-authored-by: Codex <noreply@openai.com>
- Add file-level cppcheck unusedStructMember suppression only on native protocol headers
- Keep suppression scoped to PR49 Codacy blocker files

Co-authored-by: Codex <noreply@openai.com>
@sonarqubecloud

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (8)
native/SwfocExtender.Core/src/CapabilityProbe.cpp (1)

5-11: CapabilityState::Experimental is unreachable via the public API.

markAvailable unconditionally sets state = CapabilityState::Verified, leaving Experimental a dead enumerator. Callers probing snapshot() can never observe an Experimental entry through any current write path. If this state is intentional future scaffolding, document it; otherwise either expose a markExperimental overload or remove the enumerator.

♻️ Minimal fix: add an optional state parameter to `markAvailable`

In CapabilityProbe.hpp:

-    void markAvailable(const std::string& featureId, const std::string& reasonCode = "CAPABILITY_PROBE_PASS");
+    void markAvailable(const std::string& featureId,
+                       const std::string& reasonCode = "CAPABILITY_PROBE_PASS",
+                       CapabilityState state = CapabilityState::Verified);

In CapabilityProbe.cpp:

-void CapabilityProbe::markAvailable(const std::string& featureId, const std::string& reasonCode) {
+void CapabilityProbe::markAvailable(const std::string& featureId, const std::string& reasonCode, CapabilityState state) {
     CapabilityEntry entry {};
     entry.available = true;
-    entry.state = CapabilityState::Verified;
+    entry.state = state;
     entry.reasonCode = reasonCode;
     capabilities_[featureId] = entry;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Core/src/CapabilityProbe.cpp` around lines 5 - 11, The
current markAvailable(const std::string& featureId, const std::string&
reasonCode) always sets entry.state = CapabilityState::Verified making
CapabilityState::Experimental unreachable; change the API to allow callers to
set the state by adding an optional state parameter (e.g., add a third parameter
with default CapabilityState::Verified) in the declaration in
CapabilityProbe.hpp and update the implementation in CapabilityProbe.cpp to use
that parameter when assigning entry.state (leave CapabilityEntry, capabilities_
and snapshot() behavior unchanged); alternatively, if Experimental is not needed
remove the enumerator and any references—otherwise add a new markExperimental
helper that sets entry.state = CapabilityState::Experimental and sets
capabilities_[featureId] accordingly.
tools/research/generate-signature-candidates.ps1 (2)

74-74: sourceProfilePath embeds an absolute local filesystem path in the artifact.

[System.IO.Path]::GetFullPath($ProfilePath) resolves to a machine-specific absolute path (e.g., C:\agents\…\profiles\…). This path will be baked into every generated signature-pack.json, making artifacts non-portable and potentially leaking CI agent or developer machine paths in shared repro bundles.

♻️ Proposed fix — store a repo-relative path instead
-    sourceProfilePath = [System.IO.Path]::GetFullPath($ProfilePath)
+    sourceProfilePath = [System.IO.Path]::GetFullPath($ProfilePath).Replace([System.IO.Path]::GetFullPath($repoRoot), '').TrimStart([char]'\', [char]'/')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/generate-signature-candidates.ps1` at line 74,
sourceProfilePath currently uses [System.IO.Path]::GetFullPath($ProfilePath)
which bakes an absolute, machine-specific path into signature-pack.json; change
it to compute and store a repository-relative path instead (e.g., use
[System.IO.Path]::GetRelativePath($RepoRoot, $ProfilePath) or compute a relative
URI from $RepoRoot/$PSScriptRoot to $ProfilePath) so artifacts are portable and
don't leak CI/dev paths; update the assignment to sourceProfilePath to produce
the repo-relative string and ensure downstream consumers still resolve it
relative to the repository root when reading signature-pack.json.

79-79: Set-Content -Encoding UTF8 emits a UTF-8 BOM on Windows PowerShell 5.1.

On Windows PowerShell 5.1, -Encoding UTF8 produces a UTF-8-with-BOM file. Downstream JSON consumers (schema validators, ConvertFrom-Json in other scripts, and Node.js JSON.parse) may fail or produce unexpected results when a BOM prefix is present. PowerShell 7+ changed this behavior and no longer adds a BOM with -Encoding UTF8.

♻️ Proposed fix — use .NET write to ensure BOM-less UTF-8
-($result | ConvertTo-Json -Depth 12) | Set-Content -Path $OutputPath -Encoding UTF8
+$json = $result | ConvertTo-Json -Depth 12
+[System.IO.File]::WriteAllText($OutputPath, $json, [System.Text.UTF8Encoding]::new($false))

This approach works consistently across PowerShell 5.1 and 7+.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/research/generate-signature-candidates.ps1` at line 79, The current
write uses Set-Content -Encoding UTF8 which emits a UTF-8 BOM on Windows
PowerShell 5.1; replace the Set-Content call that writes ($result |
ConvertTo-Json -Depth 12) to $OutputPath with a .NET write that uses a UTF8
encoding without BOM (e.g. call [System.IO.File]::WriteAllText($OutputPath,
$jsonString, [System.Text.UTF8Encoding]::new($false))) so the JSON is written
BOM-less and behaves consistently across PowerShell 5.1 and 7+.
native/SwfocExtender.Plugins/include/swfoc_extender/plugins/EconomyPlugin.hpp (1)

27-29: lockEnabled_ and lockedCreditsValue_ are write-only — they are stored but never read.

Both atomics are set in execute() (Lines 31–32 of EconomyPlugin.cpp) but consulted by neither capabilitySnapshot() nor any other current method. If they represent future state that will be exposed later, add a brief comment to that effect; otherwise, removing them eliminates dead state and reduces the cognitive overhead of the execute() atomic-store group.
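
If the intent is to consume them now, a minimal sketch of the read path could look like the following; the EconomyPluginSketch and EconomyCapabilitySnapshot names and the snapshot shape are illustrative assumptions, not the plugin's actual interface:

#include <atomic>
#include <cstdint>

// Illustrative snapshot type; the real capabilitySnapshot() return shape may differ.
struct EconomyCapabilitySnapshot {
    bool hookInstalled = false;
    bool lockEnabled = false;
    std::int64_t lockedCreditsValue = 0;
};

class EconomyPluginSketch {
public:
    EconomyCapabilitySnapshot capabilitySnapshot() const {
        EconomyCapabilitySnapshot snapshot;
        snapshot.hookInstalled = hookInstalled_.load();
        snapshot.lockEnabled = lockEnabled_.load();               // members are now read...
        snapshot.lockedCreditsValue = lockedCreditsValue_.load(); // ...instead of write-only
        return snapshot;
    }

private:
    std::atomic<bool> hookInstalled_{false};
    std::atomic<bool> lockEnabled_{false};
    std::atomic<std::int64_t> lockedCreditsValue_{0};
};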

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@native/SwfocExtender.Plugins/include/swfoc_extender/plugins/EconomyPlugin.hpp`
around lines 27 - 29, The atomics lockEnabled_ and lockedCreditsValue_ in class
EconomyPlugin are only written (in execute()) but never read (e.g.,
capabilitySnapshot() or any other method); either remove these unused members to
eliminate dead state or add a clear comment above their declarations explaining
they are reserved for future public state exposure and will be read by
forthcoming methods, and ensure execute() remains the single writer; if you
intend them to be consumed now, update capabilitySnapshot() or the appropriate
accessor to read them instead.
native/SwfocExtender.Bridge/src/BridgeHostMain.cpp (2)

33-60: EscapeJson is duplicated from NamedPipeBridgeServer.cpp.

This is the same issue flagged in NamedPipeBridgeServer.cpp. Both copies should be replaced by a shared internal utility header.
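
A sketch of the shared header both translation units could include; the file name, namespace, and exact escape set are assumptions and may need to match whatever the current copies actually emit:

// JsonUtils.hpp (hypothetical path: native/SwfocExtender.Bridge/src/JsonUtils.hpp)
#pragma once

#include <string>

namespace swfoc_extender::bridge::detail {

// Single inline definition: no ODR risk, and both .cpp files share one canonical EscapeJson.
inline std::string EscapeJson(const std::string& input) {
    std::string output;
    output.reserve(input.size());
    for (const char ch : input) {
        switch (ch) {
            case '"':  output += "\\\""; break;
            case '\\': output += "\\\\"; break;
            case '\n': output += "\\n";  break;
            case '\r': output += "\\r";  break;
            case '\t': output += "\\t";  break;
            default:   output += ch;     break;
        }
    }
    return output;
}

}  // namespace swfoc_extender::bridge::detail

BridgeHostMain.cpp and NamedPipeBridgeServer.cpp would then include this header and drop their local copies.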

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/BridgeHostMain.cpp` around lines 33 - 60,
Replace the duplicated EscapeJson implementation in BridgeHostMain.cpp and
NamedPipeBridgeServer.cpp with a single shared internal utility: create an
internal header (e.g., InternalJsonUtils.h) that declares EscapeJson and provide
a single inline/inline-static implementation or a .cpp implementation (to avoid
ODR) that both BridgeHostMain.cpp and NamedPipeBridgeServer.cpp include; then
remove the local EscapeJson definitions from both files and update includes to
use the new header so only one canonical EscapeJson (function name EscapeJson)
is used across the codebase.

274-284: No POSIX signal handler — WaitForShutdownSignal loops forever on non-Windows.

g_running is initialized true and only set to false inside CtrlHandler, which is #if defined(_WIN32) only. On Linux/macOS (including WSL, which the PR explicitly mentions as a fallback target), there is no SIGINT/SIGTERM handler, so the process can only be killed with SIGKILL.

🔧 Proposed fix — add POSIX signal handler
 #if defined(_WIN32)
 #define NOMINMAX
 #include <windows.h>
 #endif
+
+#if !defined(_WIN32)
+#include <csignal>
+#endif
 void InstallCtrlHandler() {
 #if defined(_WIN32)
     SetConsoleCtrlHandler(CtrlHandler, TRUE);
+#else
+    auto posixHandler = [](int) { g_running.store(false); };
+    std::signal(SIGINT, posixHandler);
+    std::signal(SIGTERM, posixHandler);
 #endif
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/BridgeHostMain.cpp` around lines 274 - 284,
On POSIX the program never registers a SIGINT/SIGTERM handler so g_running stays
true and WaitForShutdownSignal spins forever; add a POSIX signal handler that
mirrors CtrlHandler behaviour (sets g_running=false) and register it from
InstallCtrlHandler when not on _WIN32. Specifically, implement a function (e.g.,
PosixSignalHandler) that accepts the signal and sets g_running.store(false), and
in InstallCtrlHandler's non-WIN32 branch use sigaction (or signal) to register
handlers for SIGINT and SIGTERM so WaitForShutdownSignal can exit cleanly.
native/SwfocExtender.Plugins/src/EconomyPlugin.cpp (1)

30-32: Three separate atomic stores are not atomic as a group — concurrent readers can observe a torn state.

hookInstalled_.store(true);           // observer sees "installed"...
lockEnabled_.store(request.lockValue); // ...but lock state is still old
lockedCreditsValue_.store(request.intValue);

If execute() is ever called concurrently with a future snapshot or diagnostic reader that inspects more than hookInstalled_, an intermediate view (hookInstalled_=true, lockEnabled_=false, lockedCreditsValue_=0) is observable. Grouping the three values in a single struct and replacing the atomics with a std::mutex-guarded assignment (or an std::atomic<std::shared_ptr<StateSnapshot>>) closes the gap and makes the invariant explicit.
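
A minimal sketch of the mutex-guarded variant; the names mirror the review comment (StateSnapshot) rather than the repository, and the std::atomic<std::shared_ptr<StateSnapshot>> alternative would publish a whole snapshot the same way:

#include <cstdint>
#include <mutex>

struct StateSnapshot {
    bool hookInstalled = false;
    bool lockEnabled = false;
    std::int64_t lockedCreditsValue = 0;
};

class EconomyStateSketch {
public:
    void publish(bool lockValue, std::int64_t credits) {
        std::lock_guard<std::mutex> guard(mutex_);
        state_.hookInstalled = true;        // all three fields change under one lock,
        state_.lockEnabled = lockValue;     // so no reader can observe a torn mixture
        state_.lockedCreditsValue = credits;
    }

    StateSnapshot read() const {
        std::lock_guard<std::mutex> guard(mutex_);
        return state_;                      // copy out one consistent snapshot
    }

private:
    mutable std::mutex mutex_;
    StateSnapshot state_;
};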

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Plugins/src/EconomyPlugin.cpp` around lines 30 - 32,
Multiple independent atomic stores (hookInstalled_, lockEnabled_,
lockedCreditsValue_) can produce torn intermediate states visible to concurrent
readers (e.g., execute()); fix by grouping these fields into a single snapshot
and performing an atomic swap of the snapshot or a mutex-guarded assignment.
Define a StateSnapshot struct containing hookInstalled, lockEnabled and
lockedCreditsValue, replace the three atomics with either a
std::atomic<std::shared_ptr<StateSnapshot>> statePtr_ and publish a new snapshot
with statePtr_.store(std::make_shared<StateSnapshot>{...}) in the place where
hookInstalled_/lockEnabled_/lockedCreditsValue_ were set, or protect a single
non-atomic StateSnapshot with a std::mutex and assign/read under the mutex;
update execute() and any readers to load the shared_ptr (or lock the mutex) and
read the consistent snapshot.
native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp (1)

89-116: EscapeJson is copy-pasted verbatim into BridgeHostMain.cpp.

Both translation units define an identical EscapeJson in their anonymous namespaces. Promote it to a shared internal utility header (e.g., native/SwfocExtender.Bridge/src/JsonUtils.hpp) and include it from both .cpp files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 89 -
116, There are duplicate definitions of EscapeJson in anonymous namespaces
across two translation units; extract that implementation into a shared internal
header (e.g., JsonUtils.hpp) declaring an inline EscapeJson(const std::string&)
function (or place it in a unique internal namespace) with include guards/pragma
once, remove the duplicate definitions from NamedPipeBridgeServer.cpp and
BridgeHostMain.cpp, and `#include` the new header from both .cpp files so they
share the single implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp`:
- Around line 132-143: The pipe is created in CreateBridgePipe with message-mode
flags which causes ReadCommandLine to silently drop messages larger than
kPipeBufferSize; change the pipe to byte mode by removing PIPE_TYPE_MESSAGE and
PIPE_READMODE_MESSAGE (use byte mode) in CreateBridgePipe, and/or update
ReadCommandLine to explicitly loop and handle ERROR_MORE_DATA from ReadFile
(accumulate partial reads until no more data) so commands > kPipeBufferSize are
correctly appended; apply the same change for the other CreateNamedPipe usage
referenced in the diff (the 152-166 area) and ensure ReadCommandLine still
relies on '\n' framing after switching to byte mode.
- Around line 32-87: ExtractStringValue and ExtractObjectJson are fragile by
manually scanning for quotes/braces and fail on escaped quotes or braces inside
strings; replace both helpers with a proper JSON parse using a real parser
(e.g., include the nlohmann::json single-header), parse the incoming json into a
nlohmann::json object, then retrieve string fields with
at(key).get<std::string>() for commandId/featureId and retrieve payloadJson by
serializing the nested object (or using dump()) and handle parse errors by
catching nlohmann::json::parse_error to return empty/default values; update any
callers that used ExtractStringValue/ExtractObjectJson to use the new
parsed-object approach (functions: ExtractStringValue, ExtractObjectJson) so
escaped characters and strings are handled correctly.

In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp`:
- Around line 22-25: Replace the C++17 attribute [[maybe_unused]] on the struct
members available, state (type CapabilityState) and reasonCode with cppcheck
suppressions so Codacy stops flagging unusedStructMember: either put an inline
comment suppression for cppcheck (e.g. a cppcheck-suppress unusedStructMember
comment) next to each member declaration in CapabilityProbe.hpp, or add a
file-level directive (cppcheck-suppress-file unusedStructMember) alongside the
existing missingIncludeSystem suppression at the top of the file; ensure the
symbols available, state and reasonCode remain unchanged except for replacing
the attribute with the cppcheck-compatible comment.

In `@tools/run-live-validation.ps1`:
- Around line 445-456: The current hard-gate block using
$RequireNonBlockedClassification overwrites the original exception in
$fatalError when bundle generation/validation fails and $bundleClassification is
empty; modify the logic around the $RequireNonBlockedClassification check to
preserve the original exception by wrapping it as an inner/inner-like exception
or appending its message to the new InvalidOperationException (e.g., when
setting $fatalError keep the original exception object and include its
Message/ToString() in the new exception), and change the blocked-value test from
an explicit list ($bundleClassification -in
@("blocked_environment","blocked_profile_mismatch")) to a prefix check (e.g.,
test whether $bundleClassification -like "blocked_*" or use -match '^blocked_')
so future blocked_* classifications are caught; ensure the final throw
$fatalError still carries the original failure details.

---

Duplicate comments:
In
`@native/SwfocExtender.Bridge/include/swfoc_extender/bridge/NamedPipeBridgeServer.hpp`:
- Around line 37-56: The class NamedPipeBridgeServer lacks a destructor so its
member std::thread worker_ can call std::terminate() if still joinable; add a
destructor ~NamedPipeBridgeServer() that ensures the worker is stopped and
joined: call stop() (or otherwise clear running_) and then if
(worker_.joinable()) worker_.join(); locate the class declaration/implementation
for NamedPipeBridgeServer and implement the destructor there to safely shut down
the runLoop and join worker_ to avoid terminate on destruction (a minimal destructor sketch follows this prompt list).

In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp`:
- Around line 208-236: stop() can deadlock because the wake-up CreateFileA trick
may fail; instead, abort the worker's blocking ConnectNamedPipe by
cancelling/closing the server pipe handle directly. Expose or store the server
pipe HANDLE used by the worker (e.g., serverHandle_ created in the worker where
CreateNamedPipe/ConnectNamedPipe is called), and in
NamedPipeBridgeServer::stop() after setting running_ to false call
CancelIoEx(serverHandle_, nullptr) (or CancelIo on older targets) and
CloseHandle(serverHandle_) if handle is valid before joining worker_; keep the
existing CreateFileA wake attempt as a fallback but ensure
CancelIoEx/CloseHandle always runs so worker_.join() cannot deadlock.

In `@native/SwfocExtender.Core/include/swfoc_extender/core/CapabilityProbe.hpp`:
- Around line 27-37: The class CapabilityProbe exposes an unprotected mutable
unordered_map capabilities_ causing data races: protect all accesses in
markAvailable, isAvailable, and snapshot by adding a synchronization primitive
(e.g., std::mutex or std::shared_mutex) and locking appropriately (exclusive
lock in markAvailable, shared/read lock in isAvailable) and change snapshot() to
return a copy of the map (return by value) instead of a const reference to avoid
iterator/rehash invalidation; update the methods (markAvailable, isAvailable,
snapshot) to acquire the correct lock around any use of capabilities_ to ensure
thread-safety.

In
`@native/SwfocExtender.Core/include/swfoc_extender/core/HookLifecycleManager.hpp`:
- Around line 27-38: hooks_ is accessed by concurrent readers/writers but
HookLifecycleManager has no synchronization; add a mutable std::shared_mutex
member (e.g., hooks_mutex_) to the class and protect writer methods
markInstalled, markFailed, markRolledBack with
std::unique_lock<std::shared_mutex> and protect the const accessor get with
std::shared_lock<std::shared_mutex> so reads can run concurrently while writes
are exclusive.

In `@tools/native/build-native.ps1`:
- Around line 1-5: The script is missing a -DryRun switch and a comment-based
help block; add a [switch]$DryRun parameter to the param() list and thread it
into the existing Invoke-WindowsBuild and Invoke-WslBuild calls so those
functions gate cmake/wsl.exe invocations and artifact copy operations when
$DryRun is set, and add a PowerShell comment-based help block (.SYNOPSIS,
.DESCRIPTION, .NOTES, .EXAMPLE) at the top of the file documenting Mode,
Configuration, BuildDir and the DryRun smoke path and the Windows live
assumptions; ensure the help block and the new DryRun param name exactly match
(DryRun) so callers and help lookups work consistently.
- Around line 160-174: The WSL path conversion in Invoke-WslBuild is broken
because Replace("\\", "/") doesn't convert normal Windows backslashes or
translate drive letters; replace the current $linuxScript/$linuxRoot Replace
calls with calls to wsl.exe wslpath -u to convert each Windows path into a
proper WSL path (run wsl.exe -- wslpath -u "<windowsPath>" and capture/Trim()
the output), then use those converted paths when building $quotedCommand; also
check the wslpath call exit/output and throw a clear error if conversion fails
so the later bash invocation gets valid /mnt/... paths.

In `@tools/research/generate-signature-candidates.ps1`:
- Around line 53-66: $anchorIds is computed but never used; add a
cross-validation step after building $operations that iterates the keys in
$operations (e.g., $operations.Keys or each
value.requiredAnchors/optionalAnchors) and verifies every anchor name exists in
$anchorIds, and if any are missing emit an explicit failure (non-zero exit) with
a clear classification code and message identifying the missing anchors and the
operation (use the symbols $anchorIds and $operations to locate code); ensure
missing artifacts are treated as explicit failures (do not silently skip) and
return an error code per the project's classification guideline.
- Around line 29-36: The output directory creation uses
TestResults/research/$RunId which breaks the per-run isolation rule; update the
branch that sets $outputDir when [string]::IsNullOrWhiteSpace($OutputPath) so it
builds TestResults/runs/$RunId/research (e.g. use Join-Path with
"TestResults/runs/$RunId/research") before creating the directory and assigning
$OutputPath; keep references to $OutputPath, $outputDir and $RunId when
adjusting the path and ensure the else branch behavior (creating parent dir of
$OutputPath) remains unchanged.
- Around line 1-6: Add a deterministic smoke/example invocation to the script
header (a "# Example:" line showing a reproducible invocation that uses the
repo's deterministic fixture for this tool) and replace the two plain-string
throw usages in this file with machine-readable failure messages that include
explicit classification codes; locate the throws (the two "throw" statements in
the script) and change them to include a stable code prefix or structured
payload (e.g., CODE:message or a small structured object) such as an ERR_* code
for missing fixture/invalid input so the failures are parseable by automation
and match the project's classification guidelines.

In `@tools/run-live-validation.ps1`:
- Line 127: The Invoke-LiveTest function emits status with Write-Output which
writes to the success stream and contaminates its return value; change the call
that currently uses Write-Output "=== Running $Name ===" back to a
non-success-stream logger (e.g., Write-Host or Write-Verbose/Write-Information
depending on verbosity policy) so Invoke-LiveTest only returns the TRX path
string; ensure the exact string "=== Running $Name ===" is no longer emitted to
pipeline output and keep the surrounding logic in Invoke-LiveTest unchanged.


Comment on lines +32 to +87
std::string ExtractStringValue(const std::string& json, const std::string& key) {
    const auto quotedKey = "\"" + key + "\"";
    auto keyPos = json.find(quotedKey);
    if (keyPos == std::string::npos) {
        return {};
    }

    auto colonPos = json.find(':', keyPos + quotedKey.length());
    if (colonPos == std::string::npos) {
        return {};
    }

    auto firstQuote = json.find('"', colonPos + 1);
    if (firstQuote == std::string::npos) {
        return {};
    }

    auto secondQuote = json.find('"', firstQuote + 1);
    if (secondQuote == std::string::npos || secondQuote <= firstQuote) {
        return {};
    }

    return json.substr(firstQuote + 1, secondQuote - firstQuote - 1);
}

std::string ExtractObjectJson(const std::string& json, const std::string& key) {
    const auto quotedKey = "\"" + key + "\"";
    auto keyPos = json.find(quotedKey);
    if (keyPos == std::string::npos) {
        return "{}";
    }

    auto colonPos = json.find(':', keyPos + quotedKey.length());
    if (colonPos == std::string::npos) {
        return "{}";
    }

    auto openBrace = json.find('{', colonPos + 1);
    if (openBrace == std::string::npos) {
        return "{}";
    }

    auto depth = 0;
    for (std::size_t i = openBrace; i < json.size(); ++i) {
        if (json[i] == '{') {
            ++depth;
        } else if (json[i] == '}') {
            --depth;
            if (depth == 0) {
                return json.substr(openBrace, i - openBrace + 1);
            }
        }
    }

    return "{}";
}

⚠️ Potential issue | 🟡 Minor

Naive JSON helpers are fragile: escaped quotes break ExtractStringValue, braces inside strings corrupt ExtractObjectJson.

ExtractStringValue (Lines 44–54) finds the closing delimiter with a bare json.find('"', firstQuote + 1), so a value like "foo\"bar" is truncated to "foo\" — the backslash is not treated as an escape sequence.

ExtractObjectJson (Lines 69–86) counts {/} bytes without skipping string content, so {"key":"val{ue"} inflates the depth count and returns the wrong span.

Both helpers are used for commandId, featureId, and payloadJson extraction. While the current IPC payloads are generated by trusted C# test code (making exploitation unlikely), invalid intermediate state during any future free-form payload expansion could cause silent misrouting. Consider using a minimal real JSON parser (e.g., nlohmann/json single-header) for robustness.
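
A sketch of the parser-based rewrite, assuming the nlohmann/json single header can be vendored (it is not confirmed to be in the repository); the function names deliberately mirror the existing helpers:

#include <string>

#include <nlohmann/json.hpp>  // assumed vendored single header

std::string ExtractStringValue(const std::string& json, const std::string& key) {
    const auto parsed = nlohmann::json::parse(json, /*cb=*/nullptr, /*allow_exceptions=*/false);
    if (parsed.is_discarded() || !parsed.contains(key) || !parsed.at(key).is_string()) {
        return {};
    }
    return parsed.at(key).get<std::string>();  // escape sequences decoded by the parser
}

std::string ExtractObjectJson(const std::string& json, const std::string& key) {
    const auto parsed = nlohmann::json::parse(json, /*cb=*/nullptr, /*allow_exceptions=*/false);
    if (parsed.is_discarded() || !parsed.contains(key) || !parsed.at(key).is_object()) {
        return "{}";
    }
    return parsed.at(key).dump();  // re-serialized by the library, so braces inside strings are safe
}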

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 32 -
87, ExtractStringValue and ExtractObjectJson are fragile by manually scanning
for quotes/braces and fail on escaped quotes or braces inside strings; replace
both helpers with a proper JSON parse using a real parser (e.g., include the
nlohmann::json single-header), parse the incoming json into a nlohmann::json
object, then retrieve string fields with at(key).get<std::string>() for
commandId/featureId and retrieve payloadJson by serializing the nested object
(or using dump()) and handle parse errors by catching
nlohmann::json::parse_error to return empty/default values; update any callers
that used ExtractStringValue/ExtractObjectJson to use the new parsed-object
approach (functions: ExtractStringValue, ExtractObjectJson) so escaped
characters and strings are handled correctly.

Comment on lines +132 to +143
#if defined(_WIN32)
HANDLE CreateBridgePipe(const std::string& fullPipeName) {
    return CreateNamedPipeA(
        fullPipeName.c_str(),
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        PIPE_UNLIMITED_INSTANCES,
        kPipeBufferSize,
        kPipeBufferSize,
        0,
        nullptr);
}

⚠️ Potential issue | 🟡 Minor

ReadCommandLine silently truncates any command larger than 16 KB due to message-mode pipe semantics.

The pipe is created with PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE (Line 137). In message mode, when the payload exceeds kPipeBufferSize, ReadFile returns FALSE with GetLastError() == ERROR_MORE_DATA — causing the while (ReadFile(...) && bytesRead > 0) condition to short-circuit before any append. The result is an empty commandLine, producing a spurious "missing commandId" error response.

The simplest fix is to switch the pipe to byte mode since \n-framing is already used:

🔧 Proposed fix — byte-mode pipe + explicit `ERROR_MORE_DATA` loop
-        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
+        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,

Alternatively, keep message mode and handle ERROR_MORE_DATA inside ReadCommandLine:

 std::string ReadCommandLine(HANDLE pipe, std::array<char, kPipeBufferSize>& buffer) {
     std::string commandLine;
     DWORD bytesRead = 0;
-    while (ReadFile(pipe, buffer.data(), static_cast<DWORD>(buffer.size()), &bytesRead, nullptr) && bytesRead > 0) {
+    BOOL ok = FALSE;
+    while ((ok = ReadFile(pipe, buffer.data(), static_cast<DWORD>(buffer.size()), &bytesRead, nullptr)) || GetLastError() == ERROR_MORE_DATA) {
+        if (bytesRead == 0) break;
         commandLine.append(buffer.data(), bytesRead);

Also applies to: 152-166

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@native/SwfocExtender.Bridge/src/NamedPipeBridgeServer.cpp` around lines 132 -
143, The pipe is created in CreateBridgePipe with message-mode flags which
causes ReadCommandLine to silently drop messages larger than kPipeBufferSize;
change the pipe to byte mode by removing PIPE_TYPE_MESSAGE and
PIPE_READMODE_MESSAGE (use byte mode) in CreateBridgePipe, and/or update
ReadCommandLine to explicitly loop and handle ERROR_MORE_DATA from ReadFile
(accumulate partial reads until no more data) so commands > kPipeBufferSize are
correctly appended; apply the same change for the other CreateNamedPipe usage
referenced in the diff (the 152-166 area) and ensure ReadCommandLine still
relies on '\n' framing after switching to byte mode.

Comment on lines +445 to +456
if ($RequireNonBlockedClassification) {
    if (-not $EmitReproBundle) {
        $fatalError = [InvalidOperationException]::new("-RequireNonBlockedClassification requires repro-bundle emission.")
    }
    elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
        $fatalError = [InvalidOperationException]::new("Hard gate could not determine repro-bundle classification.")
    }
    elseif ($bundleClassification -in @("blocked_environment", "blocked_profile_mismatch")) {
        $fatalError = [InvalidOperationException]::new(
            "Hard gate failed: classification '$bundleClassification' is blocked. Launch with correct process context and rerun.")
    }
}

⚠️ Potential issue | 🟡 Minor

Root-cause exception is silently replaced when bundle generation fails and the hard gate is active

When collect-mod-repro-bundle.ps1 or validate-repro-bundle.ps1 throws, the catch block at line 441 sets $fatalError to the original exception. Control then falls into the $RequireNonBlockedClassification block; because $bundleClassification is still "", the IsNullOrWhiteSpace branch (line 449–451) overwrites $fatalError with a generic "Hard gate could not determine repro-bundle classification" — discarding the original root-cause exception entirely. Only a Write-Warning message survives. The final throw $fatalError at line 567 then throws the gate error, making the actual failure opaque.

Consider preserving the original exception either as an inner exception or by incorporating it in the gate error's message:

🛡️ Proposed fix
-    elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
-        $fatalError = [InvalidOperationException]::new("Hard gate could not determine repro-bundle classification.")
+    elseif ([string]::IsNullOrWhiteSpace($bundleClassification)) {
+        $rootCause = if ($null -ne $fatalError) { " Root cause: $($fatalError.Message)" } else { "" }
+        $fatalError = [InvalidOperationException]::new(
+            "Hard gate could not determine repro-bundle classification.$rootCause")
     }

Hardcoded blocked-value list will silently miss future schema values

The gate only checks "blocked_environment" and "blocked_profile_mismatch". If the repro-bundle schema introduces additional "blocked_*" classification values, they will pass the gate undetected. A prefix check is more future-proof:

♻️ Proposed refactor
-    elseif ($bundleClassification -in @("blocked_environment", "blocked_profile_mismatch")) {
+    elseif ($bundleClassification.StartsWith("blocked_", [System.StringComparison]::OrdinalIgnoreCase)) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/run-live-validation.ps1` around lines 445 - 456, The current hard-gate
block using $RequireNonBlockedClassification overwrites the original exception
in $fatalError when bundle generation/validation fails and $bundleClassification
is empty; modify the logic around the $RequireNonBlockedClassification check to
preserve the original exception by wrapping it as an inner/inner-like exception
or appending its message to the new InvalidOperationException (e.g., when
setting $fatalError keep the original exception object and include its
Message/ToString() in the new exception), and change the blocked-value test from
an explicit list ($bundleClassification -in
@("blocked_environment","blocked_profile_mismatch")) to a prefix check (e.g.,
test whether $bundleClassification -like "blocked_*" or use -match '^blocked_')
so future blocked_* classifications are caught; ensure the final throw
$fatalError still carries the original failure details.

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5dd08b1fbc


$backendRouteDecision = if ($null -ne $runtimeEvidence) {
    [ordered]@{
        backend = if ([string]::IsNullOrWhiteSpace([string]$runtimeEvidence.result.backendRoute)) { "unknown" } else { [string]$runtimeEvidence.result.backendRoute }
        allowed = [bool]$runtimeEvidence.result.succeeded


P2: Derive route allowance from route diagnostics

backendRouteDecision.allowed is currently tied to result.succeeded, which conflates backend routing with final action outcome. If routing succeeds but the action later fails (for example due to payload/readback/runtime errors), the repro bundle will incorrectly report the route as disallowed, producing misleading evidence for hard-gate triage and reason-code analysis.

Useful? React with 👍 / 👎.

$hookInstallReport = if ($null -ne $runtimeEvidence) {
    [ordered]@{
        state = if ([string]::IsNullOrWhiteSpace([string]$runtimeEvidence.result.hookState)) { "unknown" } else { [string]$runtimeEvidence.result.hookState }
        reasonCode = if ([bool]$runtimeEvidence.result.succeeded) { "CAPABILITY_PROBE_PASS" } else { "HOOK_INSTALL_FAILED" }


P2: Stop inferring hook install failure from action success

This assigns HOOK_INSTALL_FAILED whenever the action result is unsuccessful, even if hook installation actually succeeded and the failure happened later in execution. In runs where set_credits fails after routing (for example due to write/readback issues), the bundle will incorrectly classify hook lifecycle health, which undermines the reliability evidence contract.

Useful? React with 👍 / 👎.


Labels

  • area:app - WPF app/UI/ViewModel
  • area:ci - CI/CD and automation
  • area:docs - Documentation/process related work.
  • area:profiles - Profile metadata, inheritance, manifest
  • area:runtime - Runtime attach, memory actions, symbol resolution
  • area:tooling - Scripts and developer tooling
  • needs-reviewer - No eligible non-author reviewer available
  • Review effort 4/5
